Test Report: Docker_Linux 21550

0aba0a8e31d541259ffdeb45c9650281430067b8:2025-09-17:41464

Failed tests (14/328)

TestMultiControlPlane/serial/DeployApp (106.8s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 kubectl -- rollout status deployment/busybox: (3.904621326s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:01:39.560963  665399 retry.go:31] will retry after 1.383823161s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:01:41.063454  665399 retry.go:31] will retry after 842.947957ms: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:01:42.026518  665399 retry.go:31] will retry after 2.86026853s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:01:45.009554  665399 retry.go:31] will retry after 3.226009259s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:01:48.353043  665399 retry.go:31] will retry after 4.239694799s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:01:52.715823  665399 retry.go:31] will retry after 4.042048106s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0917 00:01:54.644082  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:01:56.879154  665399 retry.go:31] will retry after 13.503157036s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:02:10.513619  665399 retry.go:31] will retry after 15.518470945s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:02:26.156426  665399 retry.go:31] will retry after 21.168449959s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0917 00:02:35.606769  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0917 00:02:47.444067  665399 retry.go:31] will retry after 30.971578059s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
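
For reference, the retry loop above (ha_test.go:140-159) repeatedly counts the pod IPs reported by the deployment until all replicas have one. A minimal Go sketch of that kind of check follows; it is illustrative only, not the test's own code, and it assumes the profile name "ha-198834" and a three-replica busybox deployment as seen in this log. Note that the shell quoting around the jsonpath argument in the logged command is unnecessary when invoking the binary directly.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// countPodIPs runs kubectl through the minikube profile and counts the
// assigned pod IPs, mirroring the jsonpath query used in the log above.
func countPodIPs(profile string) (int, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"kubectl", "--", "get", "pods", "-o",
		"jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return 0, err
	}
	return len(strings.Fields(string(out))), nil
}

func main() {
	const want = 3 // assumption: the busybox deployment has three replicas
	for attempt := 1; attempt <= 5; attempt++ {
		got, err := countPodIPs("ha-198834")
		if err == nil && got == want {
			fmt.Println("all pod IPs assigned")
			return
		}
		fmt.Printf("attempt %d: got %d of %d pod IPs, retrying\n", attempt, got, want)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod IPs")
}
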
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.io: exit status 1 (167.233028ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

** /stderr **
ha_test.go:173: Pod busybox-7b57f96db7-l2jn5 could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default: exit status 1 (163.292311ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

** /stderr **
ha_test.go:183: Pod busybox-7b57f96db7-l2jn5 could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (162.762104ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

** /stderr **
ha_test.go:191: Pod busybox-7b57f96db7-l2jn5 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.default.svc.cluster.local
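
All three DNS failures above come from the same pod, busybox-7b57f96db7-l2jn5, while the other two pods resolve every name. A hedged Go sketch of driving the same in-pod nslookup checks is shown below; the helper name and structure are assumptions for illustration, and the pod and profile names are simply copied from this run's log.

package main

import (
	"fmt"
	"os/exec"
)

// nslookupInPod execs nslookup inside a pod via the minikube-wrapped kubectl,
// the same command shape as the failing lookups in the log above.
func nslookupInPod(profile, pod, host string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"kubectl", "--", "exec", pod, "--", "nslookup", host)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s -> %s:\n%s\n", pod, host, out)
	return err
}

func main() {
	// Pod and profile names are taken from this run; adjust for another run.
	pod := "busybox-7b57f96db7-l2jn5"
	for _, host := range []string{
		"kubernetes.io",
		"kubernetes.default",
		"kubernetes.default.svc.cluster.local",
	} {
		if err := nslookupInPod("ha-198834", pod, host); err != nil {
			fmt.Println("lookup failed:", err)
		}
	}
}
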
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:57:02.530585618Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6698b0ad85a9078b37114c4e66646c6dc7a67a706d28557d80b29fea1d15d512",
	            "SandboxKey": "/var/run/docker/netns/6698b0ad85a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:eb:f5:3a:ee:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "669cb4f772890bad35a4ad4cdb1934f42912d7e03fc353fd08c3e3a046cfba54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.026758806s)
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p functional-650494                                                                                              │ functional-650494 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ start   │ ha-198834 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker │ ha-198834         │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                  │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- rollout status deployment/busybox                                                            │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                             │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.io                                      │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.io                                      │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.io                                      │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.default                                 │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default                                 │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.default                                 │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.default.svc.cluster.local               │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default.svc.cluster.local               │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.default.svc.cluster.local               │ ha-198834         │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:58.042095  722351 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:58.042245  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042257  722351 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:58.042263  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042455  722351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0916 23:56:58.043028  722351 out.go:368] Setting JSON to false
	I0916 23:56:58.043951  722351 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9550,"bootTime":1758057468,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:58.044043  722351 start.go:140] virtualization: kvm guest
	I0916 23:56:58.045935  722351 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:58.047229  722351 notify.go:220] Checking for updates...
	I0916 23:56:58.047241  722351 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:58.048693  722351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:58.049858  722351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:56:58.051172  722351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:56:58.052335  722351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:58.053390  722351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:58.054603  722351 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:58.077260  722351 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:58.077444  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.132853  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.122248025 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.132998  722351 docker.go:318] overlay module found
	I0916 23:56:58.135611  722351 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:58.136750  722351 start.go:304] selected driver: docker
	I0916 23:56:58.136770  722351 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:58.136782  722351 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:58.137364  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.190249  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.179811473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.190455  722351 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:58.190736  722351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:58.192641  722351 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:58.193978  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:56:58.194069  722351 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:58.194094  722351 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:58.194188  722351 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:58.195605  722351 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0916 23:56:58.196688  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:56:58.197669  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:58.198952  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.199018  722351 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0916 23:56:58.199034  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:58.199064  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:58.199149  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:58.199167  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:56:58.199618  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:56:58.199650  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json: {Name:mkfd30616e0167206552e80675557cfcc4fee172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:58.218451  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:58.218470  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:58.218487  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:58.218525  722351 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:58.218643  722351 start.go:364] duration metric: took 94.227µs to acquireMachinesLock for "ha-198834"
	I0916 23:56:58.218683  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:56:58.218779  722351 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:58.220943  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:58.221292  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:56:58.221335  722351 client.go:168] LocalClient.Create starting
	I0916 23:56:58.221405  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:56:58.221441  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221461  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221543  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:56:58.221570  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221588  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221956  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:58.238665  722351 cli_runner.go:211] docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:58.238743  722351 network_create.go:284] running [docker network inspect ha-198834] to gather additional debugging logs...
	I0916 23:56:58.238769  722351 cli_runner.go:164] Run: docker network inspect ha-198834
	W0916 23:56:58.254999  722351 cli_runner.go:211] docker network inspect ha-198834 returned with exit code 1
	I0916 23:56:58.255086  722351 network_create.go:287] error running [docker network inspect ha-198834]: docker network inspect ha-198834: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-198834 not found
	I0916 23:56:58.255122  722351 network_create.go:289] output of [docker network inspect ha-198834]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-198834 not found
	
	** /stderr **
	I0916 23:56:58.255285  722351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:58.272422  722351 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b56820}
	I0916 23:56:58.272473  722351 network_create.go:124] attempt to create docker network ha-198834 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:58.272524  722351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-198834 ha-198834
	I0916 23:56:58.332062  722351 network_create.go:108] docker network ha-198834 192.168.49.0/24 created
	I0916 23:56:58.332109  722351 kic.go:121] calculated static IP "192.168.49.2" for the "ha-198834" container
	I0916 23:56:58.332180  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:58.347722  722351 cli_runner.go:164] Run: docker volume create ha-198834 --label name.minikube.sigs.k8s.io=ha-198834 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:58.365722  722351 oci.go:103] Successfully created a docker volume ha-198834
	I0916 23:56:58.365811  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --entrypoint /usr/bin/test -v ha-198834:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:58.752716  722351 oci.go:107] Successfully prepared a docker volume ha-198834
	I0916 23:56:58.752766  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.752791  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:58.752860  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:02.431811  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.678879308s)
	I0916 23:57:02.431852  722351 kic.go:203] duration metric: took 3.679056906s to extract preloaded images to volume ...
	W0916 23:57:02.431981  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:02.432030  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:02.432094  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:02.483868  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834 --name ha-198834 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834 --network ha-198834 --ip 192.168.49.2 --volume ha-198834:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:02.749244  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Running}}
	I0916 23:57:02.769059  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:02.787342  722351 cli_runner.go:164] Run: docker exec ha-198834 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:02.836161  722351 oci.go:144] the created container "ha-198834" has a running status.
	I0916 23:57:02.836195  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa...
	I0916 23:57:03.023198  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:03.023332  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:03.051071  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.071057  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:03.071081  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:03.121506  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.138447  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:03.138553  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.156407  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.156657  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.156674  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:03.295893  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.295938  722351 ubuntu.go:182] provisioning hostname "ha-198834"
	I0916 23:57:03.296023  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.314748  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.314993  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.315008  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0916 23:57:03.463642  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.463716  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.480946  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.481224  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.481264  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:03.616528  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:03.616561  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:03.616587  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:03.616603  722351 provision.go:84] configureAuth start
	I0916 23:57:03.616666  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:03.633505  722351 provision.go:143] copyHostCerts
	I0916 23:57:03.633553  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633590  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:03.633601  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633689  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:03.633796  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633824  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:03.633834  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633870  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:03.633969  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.633996  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:03.634007  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.634050  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:03.634188  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0916 23:57:03.786555  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:03.786617  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:03.786691  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.804115  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:03.900955  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:03.901014  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:57:03.928655  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:03.928721  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:03.953468  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:03.953537  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:03.978330  722351 provision.go:87] duration metric: took 361.708211ms to configureAuth
	I0916 23:57:03.978356  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:03.978536  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:03.978599  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.995700  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.995934  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.995954  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:04.131514  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:04.131541  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:04.131675  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:04.131752  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.148752  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.148996  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.149060  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:04.298185  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:04.298270  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.315091  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.315309  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.315326  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:05.420254  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:04.295122578 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:05.420296  722351 machine.go:96] duration metric: took 2.281822221s to provisionDockerMachine
	I0916 23:57:05.420315  722351 client.go:171] duration metric: took 7.198967751s to LocalClient.Create
	I0916 23:57:05.420340  722351 start.go:167] duration metric: took 7.199048943s to libmachine.API.Create "ha-198834"
	I0916 23:57:05.420350  722351 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0916 23:57:05.420364  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:05.420443  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:05.420495  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.437726  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.536164  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:05.539580  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:05.539616  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:05.539633  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:05.539639  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:05.539653  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:05.539713  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:05.539819  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:05.539836  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:05.540001  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:05.548691  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:05.575226  722351 start.go:296] duration metric: took 154.859714ms for postStartSetup
	I0916 23:57:05.575586  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.591876  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:05.592351  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:05.592412  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.609076  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.701881  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:05.706378  722351 start.go:128] duration metric: took 7.487581015s to createHost
	I0916 23:57:05.706400  722351 start.go:83] releasing machines lock for "ha-198834", held for 7.487744986s
	I0916 23:57:05.706457  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.723047  722351 ssh_runner.go:195] Run: cat /version.json
	I0916 23:57:05.723106  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.723117  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:05.723202  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.739830  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.739978  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.900291  722351 ssh_runner.go:195] Run: systemctl --version
	I0916 23:57:05.905029  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:05.909440  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:05.939050  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:05.939153  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:05.968631  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:05.968659  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:05.968693  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:05.968830  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:05.985490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:05.997349  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:06.007949  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:06.008036  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:06.018490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.028804  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:06.039330  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.049816  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:06.059493  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:06.069825  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:06.080461  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:06.091039  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:06.100019  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:06.109126  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.178675  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:06.251706  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:06.251760  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:06.251809  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:06.264383  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.275792  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:06.294666  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.306227  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:06.317564  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:06.334759  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:06.338327  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:06.348543  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:06.366680  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:06.432452  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:06.496386  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:06.496496  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:06.515617  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:06.527317  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.590441  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:07.360810  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:07.372759  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:07.384493  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.396808  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:07.466973  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:07.538629  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.607976  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:07.630119  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:07.642121  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.709050  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:07.784177  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.797686  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:07.797763  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:07.801576  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:07.801630  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:07.804977  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:07.837851  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:07.837957  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.862098  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.888678  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:07.888755  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:07.905526  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:07.909605  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:07.921677  722351 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:57:07.921793  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:07.921842  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.943020  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.943041  722351 docker.go:621] Images already preloaded, skipping extraction
	I0916 23:57:07.943097  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.963583  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.963609  722351 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:57:07.963623  722351 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0916 23:57:07.963750  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:07.963822  722351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 23:57:08.012977  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:08.013007  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:08.013021  722351 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:57:08.013044  722351 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:57:08.013180  722351 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:57:08.013203  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:08.013244  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:08.026529  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:08.026652  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:57:08.026716  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:08.036301  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:08.036379  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:57:08.046128  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 23:57:08.064738  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:08.083216  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0916 23:57:08.101114  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:57:08.121332  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:08.125035  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:08.136734  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:08.207460  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:08.231438  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0916 23:57:08.231468  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:08.231491  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.231634  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:08.231682  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:08.231692  722351 certs.go:256] generating profile certs ...
	I0916 23:57:08.231748  722351 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:08.231761  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt with IP's: []
	I0916 23:57:08.595971  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt ...
	I0916 23:57:08.596008  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt: {Name:mk045c8005e18afdd173496398fb640e85421530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596237  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key ...
	I0916 23:57:08.596255  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key: {Name:mkec7f349d5172bad8ab50dce27926cf4a2810b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596372  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28
	I0916 23:57:08.596390  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:57:08.930707  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 ...
	I0916 23:57:08.930740  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28: {Name:mke8743bf1c0faa0b20cb0336c0e1879fcb77e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.930956  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 ...
	I0916 23:57:08.930975  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28: {Name:mkd63d446f2fe51bc154cd1e5df7f39c484f911b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.931094  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:08.931221  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:57:08.931283  722351 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:08.931298  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt with IP's: []
	I0916 23:57:09.286083  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt ...
	I0916 23:57:09.286118  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt: {Name:mk7d8f9e6931aff0b35e5110e6bb582a3f00c824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286322  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key ...
	I0916 23:57:09.286339  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key: {Name:mkaeef389ff7f9a0b6729cce56a45b0b3aa13296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286448  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:09.286467  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:09.286479  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:09.286489  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:09.286513  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:09.286527  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:09.286538  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:09.286550  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:09.286602  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:09.286641  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:09.286650  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:09.286674  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:09.286702  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:09.286730  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:09.286767  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:09.286792  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.286805  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.286817  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.287381  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:09.312982  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:09.337940  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:09.362347  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:09.386557  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:57:09.412140  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:09.436893  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:09.461871  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:09.487876  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:09.516060  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:09.541440  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:09.567069  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:57:09.585649  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:09.591504  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:09.602004  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605727  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605791  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.612679  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:09.622556  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:09.632414  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636379  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636441  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.643659  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:09.653893  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:09.663837  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667554  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667899  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.675833  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:09.686032  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:09.689851  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:09.689923  722351 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:09.690062  722351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 23:57:09.708774  722351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:57:09.718368  722351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:57:09.727825  722351 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:57:09.727888  722351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:57:09.738106  722351 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:57:09.738126  722351 kubeadm.go:157] found existing configuration files:
	
	I0916 23:57:09.738165  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:57:09.747962  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:57:09.748017  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:57:09.757385  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:57:09.766772  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:57:09.766839  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:57:09.775735  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.784848  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:57:09.784955  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.793751  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:57:09.803170  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:57:09.803229  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:57:09.811944  722351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:57:09.867145  722351 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:57:09.919246  722351 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:57:19.614241  722351 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:57:19.614308  722351 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:57:19.614466  722351 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:57:19.614561  722351 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:57:19.614607  722351 kubeadm.go:310] OS: Linux
	I0916 23:57:19.614692  722351 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:57:19.614771  722351 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:57:19.614837  722351 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:57:19.614899  722351 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:57:19.614977  722351 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:57:19.615057  722351 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:57:19.615125  722351 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:57:19.615202  722351 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:57:19.615307  722351 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:57:19.615454  722351 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:57:19.615594  722351 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:57:19.615688  722351 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:57:19.618162  722351 out.go:252]   - Generating certificates and keys ...
	I0916 23:57:19.618260  722351 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:57:19.618349  722351 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:57:19.618445  722351 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:57:19.618533  722351 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:57:19.618635  722351 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:57:19.618717  722351 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:57:19.618792  722351 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:57:19.618993  722351 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619071  722351 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:57:19.619249  722351 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619335  722351 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:57:19.619434  722351 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:57:19.619517  722351 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:57:19.619599  722351 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:57:19.619679  722351 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:57:19.619763  722351 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:57:19.619846  722351 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:57:19.619990  722351 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:57:19.620069  722351 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:57:19.620183  722351 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:57:19.620281  722351 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:57:19.621487  722351 out.go:252]   - Booting up control plane ...
	I0916 23:57:19.621595  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:57:19.621704  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:57:19.621799  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:57:19.621956  722351 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:57:19.622047  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:57:19.622137  722351 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:57:19.622213  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:57:19.622246  722351 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:57:19.622371  722351 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:57:19.622503  722351 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:57:19.622564  722351 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000941296s
	I0916 23:57:19.622663  722351 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:57:19.622778  722351 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:57:19.622893  722351 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:57:19.623021  722351 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:57:19.623126  722351 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.545161134s
	I0916 23:57:19.623210  722351 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.1638517s
	I0916 23:57:19.623273  722351 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001738286s
	I0916 23:57:19.623369  722351 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:57:19.623478  722351 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:57:19.623551  722351 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:57:19.623792  722351 kubeadm.go:310] [mark-control-plane] Marking the node ha-198834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:57:19.623845  722351 kubeadm.go:310] [bootstrap-token] Using token: wg2on6.splp3qzu9xv61vdp
	I0916 23:57:19.625599  722351 out.go:252]   - Configuring RBAC rules ...
	I0916 23:57:19.625697  722351 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:57:19.625769  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:57:19.625966  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:57:19.626123  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:57:19.626261  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:57:19.626367  722351 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:57:19.626473  722351 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:57:19.626522  722351 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:57:19.626564  722351 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:57:19.626570  722351 kubeadm.go:310] 
	I0916 23:57:19.626631  722351 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:57:19.626643  722351 kubeadm.go:310] 
	I0916 23:57:19.626737  722351 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:57:19.626747  722351 kubeadm.go:310] 
	I0916 23:57:19.626781  722351 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:57:19.626863  722351 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:57:19.626960  722351 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:57:19.626973  722351 kubeadm.go:310] 
	I0916 23:57:19.627050  722351 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:57:19.627058  722351 kubeadm.go:310] 
	I0916 23:57:19.627113  722351 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:57:19.627119  722351 kubeadm.go:310] 
	I0916 23:57:19.627167  722351 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:57:19.627238  722351 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:57:19.627297  722351 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:57:19.627302  722351 kubeadm.go:310] 
	I0916 23:57:19.627381  722351 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:57:19.627449  722351 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:57:19.627454  722351 kubeadm.go:310] 
	I0916 23:57:19.627525  722351 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627618  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e \
	I0916 23:57:19.627647  722351 kubeadm.go:310] 	--control-plane 
	I0916 23:57:19.627653  722351 kubeadm.go:310] 
	I0916 23:57:19.627725  722351 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:57:19.627733  722351 kubeadm.go:310] 
	I0916 23:57:19.627801  722351 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627921  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e 
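The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA public key. If the hash is needed again, it can be recomputed on the control-plane node with the commonly documented openssl pipeline; this cluster keeps its certificates under /var/lib/minikube/certs (per the [certs] line above), so the path below is adjusted to that directory and the snippet is otherwise an illustrative sketch:

# recompute the discovery hash from the cluster CA certificate
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'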
	I0916 23:57:19.627933  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:19.627939  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:19.630017  722351 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:57:19.631017  722351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:57:19.635194  722351 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:57:19.635211  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:57:19.655634  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
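cni.go selected kindnet for this multi-node profile and applied the 2601-byte manifest with the bundled kubectl. Once the apply succeeds, the CNI pods can be listed with the same kubeconfig; the app=kindnet label is assumed from the kindnet addon manifest and is not printed in this log:

sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get pods -l app=kindnet -o wide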
	I0916 23:57:19.855102  722351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:57:19.855186  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:19.855265  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834 minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=true
	I0916 23:57:19.863538  722351 ops.go:34] apiserver oom_adj: -16
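ops.go reads the kube-apiserver OOM adjustment straight from /proc and reports -16, which makes the API server one of the last processes the kernel will kill under memory pressure. The same check can be reproduced on the node by hand (a minimal sketch; the oom_score_adj file is the modern kernel interface and is an assumption about the node, not something this run prints):

pid=$(pgrep kube-apiserver)
cat /proc/$pid/oom_adj        # legacy knob, logged above as -16
cat /proc/$pid/oom_score_adj  # current-kernel equivalent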
	I0916 23:57:19.931275  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.432025  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.932100  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.432105  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.932376  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.432213  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.931583  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.431392  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.932193  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.431927  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.504799  722351 kubeadm.go:1105] duration metric: took 4.649687278s to wait for elevateKubeSystemPrivileges
	I0916 23:57:24.504835  722351 kubeadm.go:394] duration metric: took 14.81493092s to StartCluster
	I0916 23:57:24.504858  722351 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.504967  722351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:57:24.505808  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.506080  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:57:24.506079  722351 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:24.506102  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.506120  722351 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:57:24.506215  722351 addons.go:69] Setting storage-provisioner=true in profile "ha-198834"
	I0916 23:57:24.506241  722351 addons.go:238] Setting addon storage-provisioner=true in "ha-198834"
	I0916 23:57:24.506236  722351 addons.go:69] Setting default-storageclass=true in profile "ha-198834"
	I0916 23:57:24.506263  722351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-198834"
	I0916 23:57:24.506271  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.506311  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:24.506630  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.506797  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.527476  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:24.528010  722351 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:57:24.528028  722351 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:57:24.528032  722351 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:57:24.528036  722351 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:57:24.528039  722351 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:57:24.528105  722351 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:57:24.528384  722351 addons.go:238] Setting addon default-storageclass=true in "ha-198834"
	I0916 23:57:24.528420  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.528683  722351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:57:24.528891  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.530050  722351 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.530067  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:57:24.530109  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.548463  722351 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.548490  722351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:57:24.548552  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.551711  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.575963  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.622716  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:57:24.680948  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.725959  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.815565  722351 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
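The long sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the docker network gateway (192.168.49.1) and adds a log directive. The injected block can be inspected afterwards; the kubectl invocation is an illustrative sketch, and the expected block is reconstructed from the sed expression itself:

# show the injected hosts block; the sed above inserts:
#   hosts {
#      192.168.49.1 host.minikube.internal
#      fallthrough
#   }
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'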
	I0916 23:57:25.027949  722351 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:57:25.029176  722351 addons.go:514] duration metric: took 523.059617ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:57:25.029216  722351 start.go:246] waiting for cluster config update ...
	I0916 23:57:25.029233  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:25.030834  722351 out.go:203] 
	I0916 23:57:25.032180  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:25.032246  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.033846  722351 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0916 23:57:25.035651  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:25.036699  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:25.038502  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.038524  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:25.038599  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:25.038624  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:25.038635  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:25.038696  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.064556  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:25.064575  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:25.064593  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:25.064625  722351 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:25.064737  722351 start.go:364] duration metric: took 87.928µs to acquireMachinesLock for "ha-198834-m02"
	I0916 23:57:25.064767  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:25.064852  722351 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:57:25.067030  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:25.067261  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:25.067302  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:25.067392  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:25.067435  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067451  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067520  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:25.067544  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067561  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067817  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:25.087287  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc0008ae780 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:25.087329  722351 kic.go:121] calculated static IP "192.168.49.3" for the "ha-198834-m02" container
	I0916 23:57:25.087390  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:25.104356  722351 cli_runner.go:164] Run: docker volume create ha-198834-m02 --label name.minikube.sigs.k8s.io=ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:25.128318  722351 oci.go:103] Successfully created a docker volume ha-198834-m02
	I0916 23:57:25.128423  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --entrypoint /usr/bin/test -v ha-198834-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:25.555443  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m02
	I0916 23:57:25.555486  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.555507  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:25.555574  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.769985  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.214340138s)
	I0916 23:57:29.770025  722351 kic.go:203] duration metric: took 4.214511914s to extract preloaded images to volume ...
	W0916 23:57:29.770138  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.770180  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.770230  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.831280  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m02 --name ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m02 --network ha-198834 --ip 192.168.49.3 --volume ha-198834-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:30.118263  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Running}}
	I0916 23:57:30.140753  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.161053  722351 cli_runner.go:164] Run: docker exec ha-198834-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:30.204746  722351 oci.go:144] the created container "ha-198834-m02" has a running status.
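kic.go computed the static address 192.168.49.3 for ha-198834-m02 and passed it to docker run via --ip on the ha-198834 network. One way to confirm the address the daemon actually assigned (the inspect format path is standard docker templating, not taken from this log):

docker container inspect ha-198834-m02 \
  --format '{{(index .NetworkSettings.Networks "ha-198834").IPAddress}}'
# expected: 192.168.49.3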
	I0916 23:57:30.204782  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa...
	I0916 23:57:30.491277  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:30.491341  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:30.523169  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.546155  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:30.546178  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.603616  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.624695  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.624784  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.648569  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.648946  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.648966  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.800750  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.800784  722351 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0916 23:57:30.800873  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.822237  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.822505  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.822519  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0916 23:57:30.984206  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.984307  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.007082  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.007398  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.007430  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:31.152561  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:31.152598  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:31.152624  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:31.152644  722351 provision.go:84] configureAuth start
	I0916 23:57:31.152709  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:31.171931  722351 provision.go:143] copyHostCerts
	I0916 23:57:31.171978  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172008  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:31.172014  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172081  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:31.172159  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172181  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:31.172185  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172216  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:31.172262  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172279  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:31.172287  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172310  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:31.172361  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0916 23:57:31.314068  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:31.314146  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:31.314208  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.336792  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:31.442195  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:31.442269  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:31.472780  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:31.472841  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:31.499569  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:31.499653  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:31.530277  722351 provision.go:87] duration metric: took 377.61476ms to configureAuth
	I0916 23:57:31.530311  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:31.530528  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:31.530587  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.548573  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.548821  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.548841  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:31.695327  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:31.695357  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:31.695559  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:31.695639  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.715926  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.716269  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.716384  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:31.879960  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:31.880054  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.901465  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.901783  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.901817  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:33.107385  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:31.877658246 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:33.107432  722351 machine.go:96] duration metric: took 2.482713737s to provisionDockerMachine
	I0916 23:57:33.107448  722351 client.go:171] duration metric: took 8.040135103s to LocalClient.Create
	I0916 23:57:33.107471  722351 start.go:167] duration metric: took 8.040214449s to libmachine.API.Create "ha-198834"
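The diff-or-restart command above only swaps in docker.service.new and restarts docker when the generated unit differs from the installed one, which is why the full unified diff is echoed back here. A quick post-restart sanity check of the new ExecStart and of the daemon's cgroup driver (standard docker/systemd tooling, not part of this log):

sudo systemctl cat docker | grep -A1 '^ExecStart='
docker info --format '{{.CgroupDriver}}'   # reports the driver dockerd is actually using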
	I0916 23:57:33.107480  722351 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0916 23:57:33.107493  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:33.107570  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:33.107624  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.129478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.235200  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:33.239799  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:33.239842  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:33.239854  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:33.239862  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:33.239881  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:33.239961  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:33.240070  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:33.240085  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:33.240211  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:33.252619  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:33.291135  722351 start.go:296] duration metric: took 183.636707ms for postStartSetup
	I0916 23:57:33.291600  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.313645  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:33.314041  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:33.314103  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.337314  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.439716  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:33.445408  722351 start.go:128] duration metric: took 8.380530846s to createHost
	I0916 23:57:33.445437  722351 start.go:83] releasing machines lock for "ha-198834-m02", held for 8.380681461s
	I0916 23:57:33.445500  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.469661  722351 out.go:179] * Found network options:
	I0916 23:57:33.471226  722351 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:33.472373  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:33.472429  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:33.472520  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:33.472550  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:33.472570  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.472621  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.495822  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.496478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.601441  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:33.704002  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:33.704085  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:33.742848  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:33.742881  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:33.742929  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:33.743066  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:33.765394  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:33.781702  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:33.796106  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:33.796186  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:33.811490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.825594  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:33.840006  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.853819  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:33.867424  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:33.882022  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:33.896562  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:33.910813  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:33.923436  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:33.936892  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.033978  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
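The sed edits above switch containerd to the systemd cgroup driver and pin the sandbox image before the service restart. A minimal sketch (assuming shell access on ha-198834-m02) for spot-checking what those edits should have left in /etc/containerd/config.toml:

    # Spot-check the settings rewritten by the sed commands above (expected values inferred from the log).
    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    #   SystemdCgroup = true
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   conf_dir = "/etc/cni/net.d"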
	I0916 23:57:34.137820  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:34.137955  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:34.138026  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:34.154788  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.170769  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:34.190397  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.207526  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:34.224333  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:34.249827  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:34.255532  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:34.270253  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:34.296311  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:34.391517  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:34.486390  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:34.486452  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:34.512957  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:34.529696  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.623612  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:35.389236  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:35.402665  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:35.418828  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.433733  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:35.524509  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:35.615815  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.688879  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:35.713552  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:35.729264  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.818355  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:35.908063  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.921416  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:35.921483  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:35.925600  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:35.925666  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:35.929510  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:35.970926  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:35.971002  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.001052  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.032731  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:36.033881  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:36.035387  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:36.055948  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:36.061767  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:36.076229  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:57:36.076482  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:36.076794  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:36.099199  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:36.099483  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0916 23:57:36.099498  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:36.099514  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.099667  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:36.099721  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:36.099735  722351 certs.go:256] generating profile certs ...
	I0916 23:57:36.099834  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:36.099867  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4
	I0916 23:57:36.099889  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:36.171638  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 ...
	I0916 23:57:36.171669  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4: {Name:mk274e4893d598b40c8fed777bc1c7c2e951159a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.171866  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 ...
	I0916 23:57:36.171885  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4: {Name:mkf2a66869f0c345fb28cc9925dc0bb02623a928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.172011  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:36.172195  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:57:36.172362  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:36.172381  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:36.172396  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:36.172415  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:36.172438  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:36.172457  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:36.172474  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:36.172493  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:36.172512  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:36.172589  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:36.172634  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:36.172648  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:36.172679  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:36.172703  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:36.172736  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:36.172796  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:36.172840  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.172861  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.172878  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.172963  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:36.194873  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:36.286293  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:36.291948  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:36.308150  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:36.312206  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:36.325598  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:36.329618  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:36.346110  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:36.350017  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:36.365628  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:36.369445  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:36.383675  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:36.387388  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:57:36.403394  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:36.432068  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:36.461592  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:36.491261  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:36.523895  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:36.552719  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:36.580284  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:36.608342  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:36.639670  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:36.672003  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:36.703856  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:36.734275  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:36.755638  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:36.777805  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:36.799338  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:36.821463  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:36.843600  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:57:36.867808  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
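The scp calls above stage the shared CA material, the profile certificates, and the kubeconfig on the new node. A minimal sketch (run from a shell on ha-198834-m02) to confirm the files landed where kubeadm expects them:

    # File list inferred from the scp targets above.
    sudo ls -l /var/lib/minikube/certs /var/lib/minikube/certs/etcd
    sudo ls -l /var/lib/minikube/kubeconfig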
	I0916 23:57:36.889233  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:36.896091  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:36.908363  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913145  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913212  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.921857  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:36.934186  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:36.945282  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949180  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949249  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.958068  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:36.970160  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:36.981053  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985350  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985410  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.993828  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
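Each bundle in /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 above), which is how OpenSSL's hashed certificate directory lookup finds it. A minimal sketch showing the hash/symlink relationship for the minikube CA:

    # The symlink name is the subject hash of the certificate, so these two should agree
    # (hash value taken from the log above).
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
    readlink /etc/ssl/certs/b5213941.0                                        # -> /etc/ssl/certs/minikubeCA.pem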
	I0916 23:57:37.004616  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:37.008764  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:37.008830  722351 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0916 23:57:37.008961  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
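The kubelet unit fragment above pins --hostname-override and --node-ip for the second control plane; the matching drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps further down. A minimal sketch for inspecting the rendered unit on the node:

    # Show the kubelet unit plus drop-ins and the node-specific flags.
    systemctl cat kubelet | grep -E 'ExecStart=|node-ip|hostname-override'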
	I0916 23:57:37.008998  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:37.009050  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:37.026582  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:37.026656  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
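The generated manifest above runs kube-vip as a static pod in ARP mode on eth0, so the control-plane VIP 192.168.49.254:8443 can float between control planes (IPVS load-balancing was skipped because the ip_vs modules were not found). A minimal sketch for checking the VIP once kubelet has picked the manifest up; the mirror-pod name follows static-pod naming and also appears later in this log:

    # Mirror pod name and VIP taken from the log.
    kubectl -n kube-system get pod kube-vip-ha-198834-m02
    curl -k https://192.168.49.254:8443/healthz    # expect: ok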
	I0916 23:57:37.026738  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:37.036867  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:37.036974  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:37.046606  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:57:37.070259  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:37.092325  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:37.116853  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:37.120789  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:37.137396  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:37.223494  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:37.256254  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:37.256574  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:37.256705  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:37.256762  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:37.278264  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:37.435308  722351 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:37.435366  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:54.013635  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (16.578241326s)
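The join command above is the output of kubeadm token create --print-join-command run on the primary node (at 23:57:37.256705), with the control-plane specific flags appended by minikube. A minimal sketch of the two pieces:

    # Run on the primary: prints the base join line with a fresh token and the CA cert hash.
    kubeadm token create --print-join-command --ttl=0
    #   kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    # minikube then appends (as seen in the log above): --control-plane --apiserver-advertise-address=<node ip>
    #   --apiserver-bind-port=8443 --node-name=<node> --cri-socket unix:///var/run/cri-dockerd.sock --ignore-preflight-errors=all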
	I0916 23:57:54.013701  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:54.233708  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:57:54.308006  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:54.383356  722351 start.go:319] duration metric: took 17.126777498s to joinCluster
	I0916 23:57:54.383433  722351 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:54.383691  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:54.385020  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:54.386187  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:54.491315  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:54.505328  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:54.505398  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:54.505659  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508947  722351 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0916 23:57:56.508979  722351 node_ready.go:38] duration metric: took 2.003299323s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508998  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:56.509065  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:56.521258  722351 api_server.go:72] duration metric: took 2.137779117s to wait for apiserver process to appear ...
	I0916 23:57:56.521298  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:56.521326  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:56.527086  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:56.528055  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:56.528078  722351 api_server.go:131] duration metric: took 6.77168ms to wait for apiserver health ...
	I0916 23:57:56.528087  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:56.534412  722351 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:56.534478  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.534486  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.534497  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.534503  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.534515  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534524  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.534535  722351 system_pods.go:61] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534541  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.534547  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.534559  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.534564  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.534667  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.534716  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534725  722351 system_pods.go:61] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534731  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.534743  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.534748  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.534753  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.534758  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.534765  722351 system_pods.go:74] duration metric: took 6.672375ms to wait for pod list to return data ...
	I0916 23:57:56.534774  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:56.538351  722351 default_sa.go:45] found service account: "default"
	I0916 23:57:56.538385  722351 default_sa.go:55] duration metric: took 3.603096ms for default service account to be created ...
	I0916 23:57:56.538399  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:56.542274  722351 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:56.542301  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.542307  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.542311  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.542314  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.542321  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542325  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.542330  722351 system_pods.go:89] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542334  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.542338  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.542344  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.542347  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.542351  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.542356  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542367  722351 system_pods.go:89] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542371  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.542375  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.542377  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.542380  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.542384  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.542393  722351 system_pods.go:126] duration metric: took 3.988364ms to wait for k8s-apps to be running ...
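The Pending kindnet-* and kube-proxy-* entries above are duplicate DaemonSet pods that lost the binding race for ha-198834-m02 ("already assigned to node"); these typically resolve on their own as the controllers reconcile. A minimal sketch for watching them settle:

    # Watch the kube-system pods for the new node settle.
    kubectl -n kube-system get pods -o wide -w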
	I0916 23:57:56.542403  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:56.542447  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:56.554466  722351 system_svc.go:56] duration metric: took 12.054188ms WaitForService to wait for kubelet
	I0916 23:57:56.554496  722351 kubeadm.go:578] duration metric: took 2.171026353s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:56.554519  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:56.557501  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557532  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557552  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557557  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557561  722351 node_conditions.go:105] duration metric: took 3.037317ms to run NodePressure ...
	I0916 23:57:56.557575  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:56.557610  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:56.559549  722351 out.go:203] 
	I0916 23:57:56.561097  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:56.561232  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.562855  722351 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0916 23:57:56.563951  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:56.565051  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:56.566271  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:56.566290  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:56.566373  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:56.566383  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:56.566485  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:56.566581  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.586635  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:56.586656  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:56.586673  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:56.586704  722351 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:56.586811  722351 start.go:364] duration metric: took 87.391µs to acquireMachinesLock for "ha-198834-m03"
	I0916 23:57:56.586843  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:56.587003  722351 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:56.589063  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:56.589158  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:56.589187  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:56.589263  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:56.589299  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589313  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589365  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:56.589385  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589398  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589634  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:56.607248  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc001595440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:56.607297  722351 kic.go:121] calculated static IP "192.168.49.4" for the "ha-198834-m03" container
	I0916 23:57:56.607371  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:56.624198  722351 cli_runner.go:164] Run: docker volume create ha-198834-m03 --label name.minikube.sigs.k8s.io=ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:56.642183  722351 oci.go:103] Successfully created a docker volume ha-198834-m03
	I0916 23:57:56.642258  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --entrypoint /usr/bin/test -v ha-198834-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:57.021785  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m03
	I0916 23:57:57.021834  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:57.021864  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:57.021952  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:59.672995  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.650992477s)
	I0916 23:57:59.673039  722351 kic.go:203] duration metric: took 2.651177157s to extract preloaded images to volume ...
	W0916 23:57:59.673144  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:59.673190  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:59.673255  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:59.730169  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m03 --name ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m03 --network ha-198834 --ip 192.168.49.4 --volume ha-198834-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:58:00.013728  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Running}}
	I0916 23:58:00.034076  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.054832  722351 cli_runner.go:164] Run: docker exec ha-198834-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:58:00.109517  722351 oci.go:144] the created container "ha-198834-m03" has a running status.
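The docker run above gives the new node container a static IP (192.168.49.4) on the ha-198834 network and publishes SSH, the Docker API, and the apiserver port on loopback-only host ports. A minimal sketch for finding the published SSH port (the log below connects to 127.0.0.1:32793):

    # Host port Docker chose for the container's SSH endpoint, plus a quick state/IP check.
    docker port ha-198834-m03 22/tcp
    docker container inspect ha-198834-m03 --format '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'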
	I0916 23:58:00.109546  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa...
	I0916 23:58:00.621029  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:58:00.621097  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:58:00.651614  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.673435  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:58:00.673460  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:58:00.730412  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.749865  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:58:00.750006  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.771445  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.771738  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.771754  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:58:00.920523  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:00.920553  722351 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0916 23:58:00.920616  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.940561  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.940837  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.940853  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0916 23:58:01.103101  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:01.103204  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:01.125182  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:01.125511  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:01.125543  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:58:01.275155  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:58:01.275201  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:58:01.275231  722351 ubuntu.go:190] setting up certificates
	I0916 23:58:01.275246  722351 provision.go:84] configureAuth start
	I0916 23:58:01.275318  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:01.296305  722351 provision.go:143] copyHostCerts
	I0916 23:58:01.296378  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296426  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:58:01.296439  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296527  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:58:01.296632  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296656  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:58:01.296682  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296726  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:58:01.296788  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296825  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:58:01.296835  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296924  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:58:01.297040  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
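
	The provision step above mints a per-node server certificate signed by the minikube CA, with SANs covering the node IP, hostname, localhost and the minikube name. A minimal sketch of the equivalent operation with Go's crypto/x509 (illustrative file names, PKCS#1 RSA CA key assumed, error/nil checks trimmed; this is not minikube's provision code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the CA pair; paths are illustrative (the log uses ca.pem / ca-key.pem under .minikube/certs).
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)   // nil checks elided for brevity
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA key
		if err != nil {
			log.Fatal(err)
		}

		// New key pair and a server certificate carrying the node's SAN list from the log above.
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-198834-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	}
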
	I0916 23:58:02.100987  722351 provision.go:177] copyRemoteCerts
	I0916 23:58:02.101048  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:58:02.101084  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.119475  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
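
	Every remote step in this log is executed over SSH to the container's published port (127.0.0.1:32793 here) as the docker user with the generated machine key. A rough stand-in using golang.org/x/crypto/ssh, for illustration only (not minikube's sshutil/ssh_runner):

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and port follow the log; adjust to your own machine directory.
		key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/ha-198834-m03/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32793", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}
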
	I0916 23:58:02.218802  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:58:02.218870  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:58:02.251628  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:58:02.251700  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:58:02.279052  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:58:02.279124  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:58:02.305168  722351 provision.go:87] duration metric: took 1.029902032s to configureAuth
	I0916 23:58:02.305208  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:58:02.305440  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:02.305491  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.322139  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.322413  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.322428  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:58:02.459594  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:58:02.459629  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:58:02.459746  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:58:02.459804  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.476657  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.476985  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.477099  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:58:02.633394  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:58:02.633489  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.651145  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.651390  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.651410  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:58:03.800032  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:58:02.631485455 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:58:03.800077  722351 machine.go:96] duration metric: took 3.050188223s to provisionDockerMachine
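
	The unit written above leaves dockerd listening on tcp://0.0.0.0:2376 with TLS verification against the certs copied to /etc/docker. Assuming that port is reachable (node IP 192.168.49.4 here) and using hypothetical local copies of the client certs, a minimal check with the Docker Go SDK could look like:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		// ca.pem/cert.pem/key.pem are illustrative local copies of the TLS material
		// the provisioner distributed; they are not paths taken from this log.
		cli, err := client.NewClientWithOpts(
			client.WithHost("tcp://192.168.49.4:2376"),
			client.WithTLSClientConfig("ca.pem", "cert.pem", "key.pem"),
			client.WithAPIVersionNegotiation(),
		)
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		ping, err := cli.Ping(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("docker API version:", ping.APIVersion)
	}
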
	I0916 23:58:03.800094  722351 client.go:171] duration metric: took 7.210891992s to LocalClient.Create
	I0916 23:58:03.800121  722351 start.go:167] duration metric: took 7.210962522s to libmachine.API.Create "ha-198834"
	I0916 23:58:03.800131  722351 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0916 23:58:03.800155  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:58:03.800229  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:58:03.800295  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.817949  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:03.918038  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:58:03.922382  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:58:03.922420  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:58:03.922430  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:58:03.922438  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:58:03.922452  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:58:03.922512  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:58:03.922607  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:58:03.922620  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:58:03.922727  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:58:03.932298  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:03.961387  722351 start.go:296] duration metric: took 161.230642ms for postStartSetup
	I0916 23:58:03.961811  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:03.979123  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:58:03.979395  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:58:03.979437  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.997520  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.091253  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:58:04.096537  722351 start.go:128] duration metric: took 7.509514126s to createHost
	I0916 23:58:04.096585  722351 start.go:83] releasing machines lock for "ha-198834-m03", held for 7.509743952s
	I0916 23:58:04.096660  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:04.115702  722351 out.go:179] * Found network options:
	I0916 23:58:04.117029  722351 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:58:04.118232  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118256  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118281  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118299  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:58:04.118395  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:58:04.118441  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.118449  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:58:04.118515  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.136875  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.137594  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.231418  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:58:04.311016  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:58:04.311108  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:58:04.340810  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:58:04.340841  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.340871  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.340997  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.359059  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:58:04.371794  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:58:04.383345  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:58:04.383421  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:58:04.394513  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.405081  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:58:04.415653  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.426510  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:58:04.436405  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:58:04.447135  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:58:04.457926  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:58:04.469563  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:58:04.478599  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:58:04.488307  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:04.557785  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:58:04.636805  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.636855  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.636899  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:58:04.649865  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.662323  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:58:04.680711  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.693319  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:58:04.705665  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.723842  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:58:04.727547  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:58:04.738845  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:58:04.758974  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:58:04.830471  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:58:04.900429  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:58:04.900482  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:58:04.920093  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:58:04.931599  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:05.002855  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:58:05.807532  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:58:05.819728  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:58:05.832303  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:05.844347  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:58:05.916277  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:58:05.988520  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.055206  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:58:06.080490  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:58:06.092817  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.162707  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:58:06.248276  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:06.261931  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:58:06.262000  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:58:06.265868  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:58:06.265941  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:58:06.269385  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:58:06.305058  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:58:06.305139  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.331725  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.358446  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:58:06.359714  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:58:06.360964  722351 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:58:06.362187  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
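
	The network inspect above uses a Go-template format string to pull the name, driver, subnet, gateway, MTU and container IPs out of the ha-198834 network. The same fields can be read by decoding the full inspect JSON; a small sketch shelling out to docker (field names follow the docker network inspect output):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type network struct {
		Name   string
		Driver string
		IPAM   struct {
			Config []struct {
				Subnet  string
				Gateway string
			}
		}
		Options    map[string]string
		Containers map[string]struct {
			IPv4Address string
		}
	}

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "ha-198834").Output()
		if err != nil {
			log.Fatal(err)
		}
		var nets []network // inspect always returns a JSON array
		if err := json.Unmarshal(out, &nets); err != nil {
			log.Fatal(err)
		}
		n := nets[0]
		fmt.Println("name:", n.Name, "driver:", n.Driver)
		for _, c := range n.IPAM.Config {
			fmt.Println("subnet:", c.Subnet, "gateway:", c.Gateway)
		}
		fmt.Println("mtu:", n.Options["com.docker.network.driver.mtu"])
		for id, c := range n.Containers {
			fmt.Println(id, c.IPv4Address)
		}
	}
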
	I0916 23:58:06.379025  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:58:06.383173  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:06.394963  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:58:06.395208  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:06.395415  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:58:06.412700  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:06.412979  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0916 23:58:06.412992  722351 certs.go:194] generating shared ca certs ...
	I0916 23:58:06.413008  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:06.413150  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:58:06.413202  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:58:06.413213  722351 certs.go:256] generating profile certs ...
	I0916 23:58:06.413290  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:58:06.413316  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0916 23:58:06.413331  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:58:07.059616  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 ...
	I0916 23:58:07.059648  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783: {Name:mka6f3e20ae0db98330bce12c7c53c8ceb029f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.059850  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 ...
	I0916 23:58:07.059873  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783: {Name:mk88fba5116449476945068bb066a5fae095ca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.060019  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:58:07.060173  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:58:07.060303  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:58:07.060320  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:58:07.060332  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:58:07.060346  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:58:07.060359  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:58:07.060371  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:58:07.060383  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:58:07.060395  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:58:07.060407  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:58:07.060462  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:58:07.060492  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:58:07.060502  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:58:07.060525  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:58:07.060546  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:58:07.060571  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:58:07.060609  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:07.060634  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.060648  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.060666  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.060725  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:07.077675  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:07.167227  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:58:07.171339  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:58:07.184631  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:58:07.188345  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:58:07.201195  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:58:07.204727  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:58:07.217344  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:58:07.220977  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:58:07.233804  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:58:07.237296  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:58:07.250936  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:58:07.254504  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:58:07.267513  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:58:07.293250  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:58:07.319357  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:58:07.345045  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:58:07.370793  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:58:07.397411  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:58:07.422329  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:58:07.447186  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:58:07.472564  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:58:07.500373  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:58:07.526598  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:58:07.552426  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:58:07.570062  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:58:07.589628  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:58:07.609486  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:58:07.630629  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:58:07.650280  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:58:07.669308  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:58:07.687700  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:58:07.694681  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:58:07.705784  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709662  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709739  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.716649  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:58:07.726290  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:58:07.736118  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740041  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740101  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.747081  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:58:07.757480  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:58:07.767310  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771054  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771114  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.778013  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:58:07.788245  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:58:07.792058  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:58:07.792123  722351 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0916 23:58:07.792232  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:58:07.792263  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:58:07.792307  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:58:07.805180  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:58:07.805247  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
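
	The kube-vip static-pod manifest above is generated with the VIP (192.168.49.254), interface and lease settings filled in per node. A reduced illustration of rendering such a manifest with text/template; the template and field names here are hypothetical and not minikube's kube-vip generator:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Trimmed manifest: only the fields templated in this sketch are kept.
	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  hostNetwork: true
	  containers:
	  - name: kube-vip
	    image: {{ .Image }}
	    args: ["manager"]
	    env:
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: address
	      value: "{{ .VIP }}"
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/admin.conf
	    name: kubeconfig
	`

	func main() {
		data := struct {
			Image, Interface, VIP string
		}{
			Image:     "ghcr.io/kube-vip/kube-vip:v1.0.0",
			Interface: "eth0",
			VIP:       "192.168.49.254",
		}
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		if err := t.Execute(os.Stdout, data); err != nil {
			log.Fatal(err)
		}
	}
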
	I0916 23:58:07.805296  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:58:07.814610  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:58:07.814678  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:58:07.825352  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:58:07.844047  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:58:07.862757  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:58:07.883848  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:58:07.887562  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:07.899646  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:07.974384  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:08.004718  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:08.005001  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false
logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:08.005124  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:58:08.005169  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:08.024622  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:08.169785  722351 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:08.169853  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:58:25.708852  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (17.538975369s)
	I0916 23:58:25.708884  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:58:25.930343  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m03 minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:58:26.006016  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:58:26.089408  722351 start.go:319] duration metric: took 18.084403561s to joinCluster
	I0916 23:58:26.089494  722351 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:26.089805  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:26.091004  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:58:26.092246  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:26.200675  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:26.214424  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:58:26.214506  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:58:26.214713  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	W0916 23:58:28.218137  722351 node_ready.go:57] node "ha-198834-m03" has "Ready":"False" status (will retry)
	I0916 23:58:29.718579  722351 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0916 23:58:29.718621  722351 node_ready.go:38] duration metric: took 3.503891029s for node "ha-198834-m03" to be "Ready" ...
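
	The wait above polls the new node until its Ready condition turns True (about 3.5s in this run, against a 6m budget). A compact equivalent with client-go, assuming a kubeconfig on disk rather than minikube's in-process client config:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // same budget the log reports
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-198834-m03", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for node to be Ready")
	}
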
	I0916 23:58:29.718640  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:58:29.718688  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:58:29.730821  722351 api_server.go:72] duration metric: took 3.641289304s to wait for apiserver process to appear ...
	I0916 23:58:29.730847  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:58:29.730870  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:58:29.736447  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:58:29.737363  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:58:29.737382  722351 api_server.go:131] duration metric: took 6.528439ms to wait for apiserver health ...
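
	The healthz probe goes directly to one apiserver endpoint (https://192.168.49.2:8443/healthz) over TLS. A bare-bones version with net/http, using illustrative local paths for the profile's client cert/key and the cluster CA shown elsewhere in this log:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// Illustrative paths; substitute the profile's client.crt/client.key and .minikube/ca.crt.
		cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
		if err != nil {
			log.Fatal(err)
		}
		caPEM, err := os.ReadFile("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		httpClient := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
			},
		}
		resp, err := httpClient.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok"
	}
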
	I0916 23:58:29.737390  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:58:29.743125  722351 system_pods.go:59] 27 kube-system pods found
	I0916 23:58:29.743154  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.743159  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.743162  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.743166  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.743169  722351 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.743172  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.743179  722351 system_pods.go:61] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743182  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.743189  722351 system_pods.go:61] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743193  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.743198  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.743202  722351 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.743206  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.743209  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.743212  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.743216  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.743220  722351 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743227  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.743231  722351 system_pods.go:61] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743236  722351 system_pods.go:61] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743241  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.743245  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.743248  722351 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.743251  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.743254  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.743257  722351 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.743260  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.743267  722351 system_pods.go:74] duration metric: took 5.871633ms to wait for pod list to return data ...
	I0916 23:58:29.743275  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:58:29.746038  722351 default_sa.go:45] found service account: "default"
	I0916 23:58:29.746059  722351 default_sa.go:55] duration metric: took 2.77496ms for default service account to be created ...
	I0916 23:58:29.746067  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:58:29.751428  722351 system_pods.go:86] 27 kube-system pods found
	I0916 23:58:29.751454  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.751459  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.751463  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.751466  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.751469  722351 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.751472  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.751478  722351 system_pods.go:89] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751482  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.751490  722351 system_pods.go:89] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751494  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.751498  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.751501  722351 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.751504  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.751508  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.751512  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.751515  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.751520  722351 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751526  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.751530  722351 system_pods.go:89] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751535  722351 system_pods.go:89] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751540  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.751545  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.751550  722351 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.751554  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.751558  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.751563  722351 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.751569  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.751577  722351 system_pods.go:126] duration metric: took 5.505301ms to wait for k8s-apps to be running ...
	I0916 23:58:29.751587  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:58:29.751637  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:58:29.764067  722351 system_svc.go:56] duration metric: took 12.467532ms WaitForService to wait for kubelet
	I0916 23:58:29.764102  722351 kubeadm.go:578] duration metric: took 3.674577242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:58:29.764127  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:58:29.767676  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767699  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767712  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767717  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767721  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767724  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767728  722351 node_conditions.go:105] duration metric: took 3.595861ms to run NodePressure ...
	I0916 23:58:29.767739  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:58:29.767761  722351 start.go:255] writing updated cluster config ...
	I0916 23:58:29.768076  722351 ssh_runner.go:195] Run: rm -f paused
	I0916 23:58:29.772054  722351 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:29.772528  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
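
For reference, the rest.Config dump above corresponds to a client built from the kube-vip endpoint plus a client certificate/key and the cluster CA. A minimal client-go sketch of that construction (the paths here are illustrative placeholders, not the jenkins paths from the log, and this is not minikube's kapi.go code):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and TLS files mirror the shape of the logged config; adjust for your profile.
	cfg := &rest.Config{
		Host: "https://192.168.49.254:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/ha-198834/client.crt", // illustrative path
			KeyFile:  "/path/to/profiles/ha-198834/client.key", // illustrative path
			CAFile:   "/path/to/.minikube/ca.crt",              // illustrative path
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("built clientset: %T\n", clientset)
}
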
	I0916 23:58:29.776391  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781517  722351 pod_ready.go:94] pod "coredns-66bc5c9577-5wx4k" is "Ready"
	I0916 23:58:29.781544  722351 pod_ready.go:86] duration metric: took 5.128752ms for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781552  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.786524  722351 pod_ready.go:94] pod "coredns-66bc5c9577-mjbz6" is "Ready"
	I0916 23:58:29.786549  722351 pod_ready.go:86] duration metric: took 4.991527ms for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.789148  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793593  722351 pod_ready.go:94] pod "etcd-ha-198834" is "Ready"
	I0916 23:58:29.793614  722351 pod_ready.go:86] duration metric: took 4.43654ms for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793622  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797833  722351 pod_ready.go:94] pod "etcd-ha-198834-m02" is "Ready"
	I0916 23:58:29.797856  722351 pod_ready.go:86] duration metric: took 4.228462ms for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797864  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.974055  722351 request.go:683] "Waited before sending request" delay="176.0853ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.173047  722351 request.go:683] "Waited before sending request" delay="193.205885ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.373324  722351 request.go:683] "Waited before sending request" delay="74.260595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.573189  722351 request.go:683] "Waited before sending request" delay="196.187075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.973960  722351 request.go:683] "Waited before sending request" delay="171.749825ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.977519  722351 pod_ready.go:94] pod "etcd-ha-198834-m03" is "Ready"
	I0916 23:58:30.977548  722351 pod_ready.go:86] duration metric: took 1.179678858s for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.172996  722351 request.go:683] "Waited before sending request" delay="195.270589ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:58:31.176896  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.373184  722351 request.go:683] "Waited before sending request" delay="196.155083ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834"
	I0916 23:58:31.573091  722351 request.go:683] "Waited before sending request" delay="196.292532ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:31.576254  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834" is "Ready"
	I0916 23:58:31.576280  722351 pod_ready.go:86] duration metric: took 399.33205ms for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.576288  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.773718  722351 request.go:683] "Waited before sending request" delay="197.34633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m02"
	I0916 23:58:31.973716  722351 request.go:683] "Waited before sending request" delay="196.477986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:31.978504  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m02" is "Ready"
	I0916 23:58:31.978555  722351 pod_ready.go:86] duration metric: took 402.258846ms for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.978567  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.172964  722351 request.go:683] "Waited before sending request" delay="194.26238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m03"
	I0916 23:58:32.373491  722351 request.go:683] "Waited before sending request" delay="197.345263ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:32.376525  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m03" is "Ready"
	I0916 23:58:32.376552  722351 pod_ready.go:86] duration metric: took 397.9768ms for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.573017  722351 request.go:683] "Waited before sending request" delay="196.299414ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:58:32.577487  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.773969  722351 request.go:683] "Waited before sending request" delay="196.341624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834"
	I0916 23:58:32.973585  722351 request.go:683] "Waited before sending request" delay="196.346276ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:32.977689  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834" is "Ready"
	I0916 23:58:32.977721  722351 pod_ready.go:86] duration metric: took 400.206125ms for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.977735  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.173032  722351 request.go:683] "Waited before sending request" delay="195.180271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m02"
	I0916 23:58:33.373811  722351 request.go:683] "Waited before sending request" delay="197.350717ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:33.376722  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m02" is "Ready"
	I0916 23:58:33.376747  722351 pod_ready.go:86] duration metric: took 399.004052ms for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.376756  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.573048  722351 request.go:683] "Waited before sending request" delay="196.186349ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m03"
	I0916 23:58:33.773733  722351 request.go:683] "Waited before sending request" delay="197.347012ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:33.776944  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m03" is "Ready"
	I0916 23:58:33.776972  722351 pod_ready.go:86] duration metric: took 400.209131ms for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.973425  722351 request.go:683] "Waited before sending request" delay="196.344301ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:58:33.977203  722351 pod_ready.go:83] waiting for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.173688  722351 request.go:683] "Waited before sending request" delay="196.345801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tkhn"
	I0916 23:58:34.373026  722351 request.go:683] "Waited before sending request" delay="196.256084ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:34.376079  722351 pod_ready.go:94] pod "kube-proxy-5tkhn" is "Ready"
	I0916 23:58:34.376106  722351 pod_ready.go:86] duration metric: took 398.875647ms for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.376114  722351 pod_ready.go:83] waiting for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.573402  722351 request.go:683] "Waited before sending request" delay="197.174223ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:34.773022  722351 request.go:683] "Waited before sending request" delay="196.289258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:34.973958  722351 request.go:683] "Waited before sending request" delay="97.260541ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:35.173637  722351 request.go:683] "Waited before sending request" delay="196.407064ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.573487  722351 request.go:683] "Waited before sending request" delay="193.254271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.973307  722351 request.go:683] "Waited before sending request" delay="93.259111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	W0916 23:58:36.383328  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:38.882062  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:40.882520  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:42.883194  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:45.382843  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:47.882744  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:49.882993  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:51.883265  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:54.383005  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:56.882555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:59.382463  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:01.382897  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:03.883583  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:06.382581  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:08.882275  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:11.382224  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:13.382333  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:15.882727  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:18.383800  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:20.882547  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:22.883081  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:25.383627  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:27.882377  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:29.882787  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:31.884042  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:34.382932  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:36.882730  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:38.882959  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:40.883411  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:43.382771  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:45.882938  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:48.381607  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:50.382229  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:52.382889  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:54.882546  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:56.882802  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:58.882939  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:00.883550  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:03.382872  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:05.383021  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:07.384166  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:09.883064  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:11.884141  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:14.383248  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:16.883441  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:18.884438  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:21.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:23.883713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:26.383093  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:28.883552  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:31.383392  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:33.883626  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:35.883823  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:38.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:40.883430  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:43.383026  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:45.883091  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:48.382865  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:50.882713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:52.882989  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:55.383076  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:57.383555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:59.882704  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:01.883495  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:04.382406  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:06.383424  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:08.883456  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:11.382988  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:13.882379  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:15.883651  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:18.382551  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:20.382997  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:22.882943  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:24.883256  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:27.383660  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:29.882955  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:32.383364  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	I0917 00:01:34.382530  722351 pod_ready.go:94] pod "kube-proxy-d8brp" is "Ready"
	I0917 00:01:34.382562  722351 pod_ready.go:86] duration metric: took 3m0.006439942s for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.382572  722351 pod_ready.go:83] waiting for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.387645  722351 pod_ready.go:94] pod "kube-proxy-h2fxd" is "Ready"
	I0917 00:01:34.387677  722351 pod_ready.go:86] duration metric: took 5.098826ms for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.390707  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396086  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834" is "Ready"
	I0917 00:01:34.396115  722351 pod_ready.go:86] duration metric: took 5.379692ms for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396126  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400646  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m02" is "Ready"
	I0917 00:01:34.400670  722351 pod_ready.go:86] duration metric: took 4.536355ms for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400680  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.577209  722351 request.go:683] "Waited before sending request" delay="174.117357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0917 00:01:34.580767  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m03" is "Ready"
	I0917 00:01:34.580796  722351 pod_ready.go:86] duration metric: took 180.109317ms for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.580808  722351 pod_ready.go:40] duration metric: took 3m4.808720134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:01:34.629691  722351 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:01:34.631405  722351 out.go:179] * Done! kubectl is now configured to use "ha-198834" cluster and "default" namespace by default
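
For reference, the pod_ready warnings above record a readiness poll: kube-proxy-d8brp stayed not-Ready for roughly three minutes before the wait completed. A minimal client-go sketch of that kind of poll (an illustrative helper, not minikube's pod_ready.go; it assumes a clientset such as the one sketched after the rest.Config dump):

package podready // hypothetical package name

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
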
	
	
	==> Docker <==
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50aecbe9f874a63c5159d55af06211bca7903e623f01f1e603f267caaf6da9a7/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.259744438Z" level=info msg="ignoring event" container=fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.275867775Z" level=info msg="ignoring event" container=64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.320870537Z" level=info msg="ignoring event" container=310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.336829292Z" level=info msg="ignoring event" container=a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687384709Z" level=info msg="ignoring event" container=11889e34950f849cf7805c6d56f1957ad9d5af727f4810f2da728671398b9f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687719889Z" level=info msg="ignoring event" container=1ccdf9f33d5601763297f230a2f6e51620db2ed183e9f4b9179f4ccef579dfac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756623723Z" level=info msg="ignoring event" container=bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756673284Z" level=info msg="ignoring event" container=870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:01:36 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:01:37 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:37Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	43ce744921507       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   2ab50e090d466       busybox-7b57f96db7-pstjp
	f4f7ea59034e3       52546a367cc9e                                                                                         5 minutes ago        Running             coredns                   2                   7b09bb6b1db0b       coredns-66bc5c9577-mjbz6
	9a9eb43950f05       52546a367cc9e                                                                                         5 minutes ago        Running             coredns                   2                   bfd4ac8a61c79       coredns-66bc5c9577-5wx4k
	1ccdf9f33d560       52546a367cc9e                                                                                         5 minutes ago        Exited              coredns                   1                   bf6d6b59f2413       coredns-66bc5c9577-mjbz6
	11889e34950f8       52546a367cc9e                                                                                         5 minutes ago        Exited              coredns                   1                   870758f308362       coredns-66bc5c9577-5wx4k
	470c5aeb0143c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              5 minutes ago        Running             kindnet-cni               0                   f541f878be896       kindnet-h28vp
	b16ddbbc469c5       6e38f40d628db                                                                                         5 minutes ago        Running             storage-provisioner       0                   50aecbe9f874a       storage-provisioner
	2da683f529549       df0860106674d                                                                                         5 minutes ago        Running             kube-proxy                0                   b04f554fbbf03       kube-proxy-5tkhn
	8a32665f7e3e4       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     6 minutes ago        Running             kube-vip                  0                   5e4aed7a38e18       kube-vip-ha-198834
	4f536df8f44eb       a0af72f2ec6d6                                                                                         6 minutes ago        Running             kube-controller-manager   0                   3f97e150fa11b       kube-controller-manager-ha-198834
	ea129c2b5408a       90550c43ad2bc                                                                                         6 minutes ago        Running             kube-apiserver            0                   364803df34eb0       kube-apiserver-ha-198834
	69601afa8d5b0       5f1f5298c888d                                                                                         6 minutes ago        Running             etcd                      0                   d6bbb58cc14ca       etcd-ha-198834
	82a99d0c7744a       46169d968e920                                                                                         6 minutes ago        Running             kube-scheduler            0                   7ffde546949d7       kube-scheduler-ha-198834
	
	
	==> coredns [11889e34950f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50107 - 45856 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000165011s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:50484 - 7509 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000096464s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [1ccdf9f33d56] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49262 - 38359 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000112146s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:51442 - 41164 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000125545s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [9a9eb43950f0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48462 - 46874 "HINFO IN 5273252588524494281.7165436024008789767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039842483s
	[INFO] 10.244.1.2:57104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003506199s
	[INFO] 10.244.1.2:35085 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.022178595s
	[INFO] 10.244.0.4:51301 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001619426s
	[INFO] 10.244.1.2:53849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259853s
	[INFO] 10.244.1.2:45188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162256s
	[INFO] 10.244.1.2:47534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012152721s
	[INFO] 10.244.1.2:52406 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144842s
	[INFO] 10.244.0.4:34463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015354953s
	[INFO] 10.244.0.4:44729 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186473s
	[INFO] 10.244.0.4:49846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119774s
	[INFO] 10.244.0.4:48015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170848s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022712s
	[INFO] 10.244.1.2:41177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176943s
	[INFO] 10.244.1.2:35431 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141918s
	[INFO] 10.244.0.4:42357 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222475s
	[INFO] 10.244.0.4:38639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076388s
	
	
	==> coredns [f4f7ea59034e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55965 - 29875 "HINFO IN 6625775143588404920.46653605595863863. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.021009667s
	[INFO] 10.244.1.2:48391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347519s
	[INFO] 10.244.1.2:52968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.006268731s
	[INFO] 10.244.1.2:37064 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.01625905s
	[INFO] 10.244.0.4:38724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015051s
	[INFO] 10.244.0.4:54867 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000812822s
	[INFO] 10.244.0.4:36556 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000823234s
	[INFO] 10.244.0.4:49673 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000092222s
	[INFO] 10.244.1.2:54588 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010881123s
	[INFO] 10.244.1.2:37311 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184317s
	[INFO] 10.244.1.2:34776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214951s
	[INFO] 10.244.1.2:60592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142928s
	[INFO] 10.244.0.4:49014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014497s
	[INFO] 10.244.0.4:49266 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004264282s
	[INFO] 10.244.0.4:42048 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128777s
	[INFO] 10.244.0.4:37542 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071541s
	[INFO] 10.244.1.2:43417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165726s
	[INFO] 10.244.0.4:54211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155228s
	[INFO] 10.244.0.4:54131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161968s
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 3525bf030f0d49c1ab057441433c477c
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m57s
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m57s
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m3s
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m57s
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m56s  kube-proxy       
	  Normal  Starting                 6m3s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m3s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m3s   kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s   kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s   kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m58s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           5m29s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           4m58s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 35caf7934a824e33949ce426f7316bfd
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m25s
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m28s
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m21s  kube-proxy       
	  Normal  RegisteredNode  5m24s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  5m23s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  4m58s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	
	
	Name:               ha-198834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-198834-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 32c4e7dc065e4fa49595825994457b8e
	  System UUID:                6f810798-3461-44d1-91c3-d55b483ec842
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l2jn5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 etcd-ha-198834-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m52s
	  kube-system                 kindnet-67fn9                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m47s
	  kube-system                 kube-apiserver-ha-198834-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-ha-198834-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-d8brp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-ha-198834-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-vip-ha-198834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  4m54s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  4m53s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  4m53s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
	
	
	==> etcd [69601afa8d5b] <==
	{"level":"info","ts":"2025-09-16T23:58:12.665306Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.670540Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-16T23:58:12.671162Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.670991Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":1384448,"size":"1.4 MB"}
	{"level":"info","ts":"2025-09-16T23:58:12.677546Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":686,"remote-peer-id":"b3d041dbb5a11c89","bytes":1393601,"size":"1.4 MB"}
	{"level":"warn","ts":"2025-09-16T23:58:12.688158Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:58:12.688674Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:58:12.699050Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-16T23:58:12.699094Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.699108Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.702028Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-16T23:58:12.702080Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.702094Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.733438Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.736369Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-16T23:58:12.759123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:34222","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:58:12.760774Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(5981864578030751937 12593026477526642892 12956928539845794953)"}
	{"level":"info","ts":"2025-09-16T23:58:12.760967Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.761007Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:19.991223Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:25.496900Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:30.072550Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:32.068856Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:40.123997Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:42.678047Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","bytes":1393601,"size":"1.4 MB","took":"30.013494343s"}
	
	
	==> kernel <==
	 00:03:21 up  2:45,  0 users,  load average: 2.29, 1.39, 1.12
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [470c5aeb0143] <==
	I0917 00:02:40.420211       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:02:50.424337       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:02:50.424386       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:02:50.424593       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:02:50.424610       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:02:50.424745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:50.424758       1 main.go:301] handling current node
	I0917 00:03:00.418533       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:00.418581       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:00.418801       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:00.418814       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:00.418930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:00.418942       1 main.go:301] handling current node
	I0917 00:03:10.423193       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:10.423225       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:10.423436       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:10.423448       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:10.423551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:10.423559       1 main.go:301] handling current node
	I0917 00:03:20.423023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:20.423063       1 main.go:301] handling current node
	I0917 00:03:20.423080       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:20.423085       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:20.423378       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:20.423393       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [ea129c2b5408] <==
	I0916 23:57:18.340630       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0916 23:57:19.016197       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0916 23:57:19.025253       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 23:57:19.032951       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0916 23:57:23.344022       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0916 23:57:24.194840       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:57:24.200277       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:57:24.242655       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0916 23:58:29.048843       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:34.361323       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:36.632983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:02.667929       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:58.976838       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:19.218755       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:15.644338       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:43.338268       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:03:18.851078       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58262: use of closed network connection
	E0917 00:03:19.024113       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58282: use of closed network connection
	E0917 00:03:19.194951       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58306: use of closed network connection
	E0917 00:03:19.388722       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58332: use of closed network connection
	E0917 00:03:19.557698       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58342: use of closed network connection
	E0917 00:03:19.744687       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58348: use of closed network connection
	E0917 00:03:19.919836       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58362: use of closed network connection
	E0917 00:03:20.087518       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58376: use of closed network connection
	E0917 00:03:20.254024       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58398: use of closed network connection
	
	
	==> kube-controller-manager [4f536df8f44e] <==
	I0916 23:57:23.340737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0916 23:57:23.340876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:57:23.341125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:57:23.341625       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0916 23:57:23.341694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0916 23:57:23.342559       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0916 23:57:23.344828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:57:23.344975       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:57:23.345054       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:57:23.345095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:57:23.345107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:57:23.345114       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:57:23.346125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:23.351186       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834" podCIDRs=["10.244.0.0/24"]
	I0916 23:57:23.356557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:23.360087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:53.917484       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m02\" does not exist"
	I0916 23:57:53.927329       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:58.295579       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	E0916 23:58:24.690329       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 23:58:24.703047       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:58:25.387067       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m03\" does not exist"
	I0916 23:58:25.397154       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m03" podCIDRs=["10.244.2.0/24"]
	I0916 23:58:28.308323       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	E0917 00:01:35.727697       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [2da683f52954] <==
	I0916 23:57:24.932824       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:57:25.001436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:57:25.102414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:57:25.102449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:57:25.102563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:57:25.131540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:57:25.131604       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:57:25.138482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:57:25.139006       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:57:25.139079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:57:25.143232       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:57:25.143254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:57:25.143282       1 config.go:200] "Starting service config controller"
	I0916 23:57:25.143288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:57:25.143298       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:57:25.143304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:57:25.144514       1 config.go:309] "Starting node config controller"
	I0916 23:57:25.144540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:57:25.144548       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:57:25.243772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [82a99d0c7744] <==
	E0916 23:58:30.036759       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.036813       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5897933c-61bc-4eef-8922-66c37ba68c57(kube-system/kindnet-rwc59) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	E0916 23:58:30.036834       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	I0916 23:58:30.038109       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.048424       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:30.048665       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4edbf3a1-360c-4f5c-81a3-aa63deb9a159(kube-system/kindnet-lpn5v) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	E0916 23:58:30.048751       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	I0916 23:58:30.051563       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:32.033373       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:32.033442       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b547a423-84fb-45ae-be85-ebd5ae31cede(kube-system/kindnet-wklkh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	E0916 23:58:32.033468       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	I0916 23:58:32.034562       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:34.059741       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	E0916 23:58:34.059840       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ceef8152-3e11-4bf0-99dc-43470c027544(kube-system/kindnet-cdptd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.059869       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.060293       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	E0916 23:58:34.060658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c8813f5f-dfaf-4be1-a0ba-e444bcb2e943(kube-system/kindnet-8t8pb) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	E0916 23:58:34.061375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	I0916 23:58:34.061557       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	I0916 23:58:34.062640       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	I0917 00:01:35.538693       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="ecad6988-1efb-4cc6-8920-902b41d3f3ed" pod="default/busybox-7b57f96db7-kg4q6" assumedNode="ha-198834-m02" currentNode="ha-198834-m03"
	E0917 00:01:35.544474       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m03"
	E0917 00:01:35.546366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ecad6988-1efb-4cc6-8920-902b41d3f3ed(default/busybox-7b57f96db7-kg4q6) was assumed on ha-198834-m03 but assigned to ha-198834-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	E0917 00:01:35.546583       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	I0917 00:01:35.548055       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m02"
	
	
	==> kubelet <==
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349086    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51d39f-7e43-461b-a021-13ddf0cb9845-lib-modules\") pod \"kindnet-h28vp\" (UID: \"6c51d39f-7e43-461b-a021-13ddf0cb9845\") " pod="kube-system/kindnet-h28vp"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349103    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-xtables-lock\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349123    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84n49\" (UniqueName: \"kubernetes.io/projected/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-kube-api-access-84n49\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650251    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-config-volume\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650425    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5ns\" (UniqueName: \"kubernetes.io/projected/c918625f-be11-44bf-8b82-d4c21b8993d1-kube-api-access-th5ns\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650660    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c918625f-be11-44bf-8b82-d4c21b8993d1-config-volume\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650701    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmb4\" (UniqueName: \"kubernetes.io/projected/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-kube-api-access-xhmb4\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.014693    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5tkhn" podStartSLOduration=1.014665687 podStartE2EDuration="1.014665687s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:24.932304069 +0000 UTC m=+6.176281069" watchObservedRunningTime="2025-09-16 23:57:25.014665687 +0000 UTC m=+6.258642688"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.042478    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.046332    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f541f878be89694936d8219d8e7fc682a8a169d9edf6417f067927aa4748c0ae"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153403    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrvp\" (UniqueName: \"kubernetes.io/projected/6b6f64f3-2647-4e13-be41-47fcc6111f3e-kube-api-access-jqrvp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153458    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b6f64f3-2647-4e13-be41-47fcc6111f3e-tmp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098005    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5wx4k" podStartSLOduration=2.097979793 podStartE2EDuration="2.097979793s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.086842117 +0000 UTC m=+7.330819118" watchObservedRunningTime="2025-09-16 23:57:26.097979793 +0000 UTC m=+7.341956793"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098130    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.098124108 podStartE2EDuration="1.098124108s" podCreationTimestamp="2025-09-16 23:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.097817254 +0000 UTC m=+7.341794256" watchObservedRunningTime="2025-09-16 23:57:26.098124108 +0000 UTC m=+7.342101108"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.159968    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mjbz6" podStartSLOduration=5.159946005 podStartE2EDuration="5.159946005s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.124330373 +0000 UTC m=+7.368307374" watchObservedRunningTime="2025-09-16 23:57:29.159946005 +0000 UTC m=+10.403923006"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.193262    2468 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.194144    2468 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 23:57:30 ha-198834 kubelet[2468]: I0916 23:57:30.158085    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h28vp" podStartSLOduration=1.342825895 podStartE2EDuration="6.158061718s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="2025-09-16 23:57:24.955662014 +0000 UTC m=+6.199639012" lastFinishedPulling="2025-09-16 23:57:29.770897851 +0000 UTC m=+11.014874835" observedRunningTime="2025-09-16 23:57:30.157595407 +0000 UTC m=+11.401572408" watchObservedRunningTime="2025-09-16 23:57:30.158061718 +0000 UTC m=+11.402038720"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.230434    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.258365    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370599    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370662    2468 scope.go:117] "RemoveContainer" containerID="fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.388953    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.389033    2468 scope.go:117] "RemoveContainer" containerID="64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea"
	Sep 17 00:01:35 ha-198834 kubelet[2468]: I0917 00:01:35.703764    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5r6\" (UniqueName: \"kubernetes.io/projected/a7cf1231-2a12-4247-a01a-2c2f02f5f2d8-kube-api-access-vt5r6\") pod \"busybox-7b57f96db7-pstjp\" (UID: \"a7cf1231-2a12-4247-a01a-2c2f02f5f2d8\") " pod="default/busybox-7b57f96db7-pstjp"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (106.80s)
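
Note on the DeployApp failure: the describe-nodes output above shows all three busybox replicas did land, one per node (busybox-7b57f96db7-pstjp, -kg4q6, -l2jn5), and the kube-scheduler log shows -kg4q6 was briefly assumed on ha-198834-m03 before being bound to ha-198834-m02, which is consistent with the third pod getting its IP only after the check's retry window. Below is a minimal, self-contained sketch of that kind of wait-for-pod-IPs loop using only kubectl and the Go standard library; the context name, namespace, and replica count are illustrative assumptions taken from this log, not the actual harness code in ha_test.go.

// podipwait: a minimal sketch (not the ha_test.go harness code) of waiting
// until every pod in the default namespace reports a podIP before asserting
// anything about DNS. Context "ha-198834", namespace "default", and want=3
// replicas are illustrative assumptions taken from this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs shells out to kubectl and returns the pod IPs currently assigned.
func podIPs(context string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-n", "default",
		"-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	const want = 3 // one busybox replica per control-plane node
	deadline := time.Now().Add(2 * time.Minute)
	for {
		ips, err := podIPs("ha-198834")
		if err == nil && len(ips) >= want {
			fmt.Println("all pod IPs assigned:", ips)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("gave up: have %d IPs, want %d (err=%v)\n", len(ips), want, err)
			return
		}
		time.Sleep(3 * time.Second)
	}
}
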

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (2.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:214: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

                                                
                                                
** /stderr **
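
Note on the PingHostFromPods failure: the test derives the host IP by running nslookup host.minikube.internal inside the busybox pod and slicing a fixed output line (awk 'NR==5' | cut -d' ' -f3), so when BusyBox nslookup cannot resolve the name the pipeline yields an empty string and the test reports "minikube host ip is nil". Below is a minimal sketch of the same lookup that parses the answer line instead of relying on a fixed line number; the context and pod names are taken from the log above, the parsing assumes BusyBox nslookup's "Address N: <ip> <name>" output format, and this is not the harness's actual implementation.

// hostip: a minimal sketch (not the ha_test.go harness code) of resolving
// host.minikube.internal from inside a pod and reporting the address.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostIPFromPod(context, pod string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"exec", pod, "--",
		"nslookup", "host.minikube.internal").CombinedOutput()
	if err != nil {
		// BusyBox nslookup exits non-zero when the name cannot be resolved,
		// which matches the "can't resolve 'host.minikube.internal'" error above.
		return "", fmt.Errorf("nslookup failed: %v: %s", err, out)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "Address") && strings.Contains(line, "host.minikube.internal") {
			if fields := strings.Fields(line); len(fields) >= 3 {
				return fields[2], nil // "Address 1: <ip> host.minikube.internal"
			}
		}
	}
	return "", fmt.Errorf("host.minikube.internal not in nslookup answer:\n%s", out)
}

func main() {
	ip, err := hostIPFromPod("ha-198834", "busybox-7b57f96db7-l2jn5")
	if err != nil {
		fmt.Println("host ip is nil:", err)
		return
	}
	fmt.Println("host ip:", ip) // expected to be the host gateway, e.g. 192.168.49.1
}
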
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:57:02.530585618Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6698b0ad85a9078b37114c4e66646c6dc7a67a706d28557d80b29fea1d15d512",
	            "SandboxKey": "/var/run/docker/netns/6698b0ad85a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:eb:f5:3a:ee:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "669cb4f772890bad35a4ad4cdb1934f42912d7e03fc353fd08c3e3a046cfba54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
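Individual fields from an inspect dump like the one above can be pulled directly with a Go template instead of reading the full JSON; for example, the host port mapped to the node's SSH port (the same query the provisioning code issues later in this log) can be read with:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-198834
	# prints 32783 for the container shown above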
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.039312863s)
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.io                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.io                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.io                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.default                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.default                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.default.svc.cluster.local                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- sh -c ping -c 1 192.168.49.1                                        │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
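The audit table above boils down to two probes repeated across the busybox pods: a pod-IP listing via JSONPath and a per-pod DNS lookup. Reproduced against the same profile, the two commands look like:

	out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
	out/minikube-linux-amd64 -p ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.default.svc.cluster.local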
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:58.042095  722351 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:58.042245  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042257  722351 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:58.042263  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042455  722351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0916 23:56:58.043028  722351 out.go:368] Setting JSON to false
	I0916 23:56:58.043951  722351 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9550,"bootTime":1758057468,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:58.044043  722351 start.go:140] virtualization: kvm guest
	I0916 23:56:58.045935  722351 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:58.047229  722351 notify.go:220] Checking for updates...
	I0916 23:56:58.047241  722351 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:58.048693  722351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:58.049858  722351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:56:58.051172  722351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:56:58.052335  722351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:58.053390  722351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:58.054603  722351 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:58.077260  722351 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:58.077444  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.132853  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.122248025 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.132998  722351 docker.go:318] overlay module found
	I0916 23:56:58.135611  722351 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:58.136750  722351 start.go:304] selected driver: docker
	I0916 23:56:58.136770  722351 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:58.136782  722351 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:58.137364  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.190249  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.179811473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.190455  722351 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:58.190736  722351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:58.192641  722351 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:58.193978  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:56:58.194069  722351 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:58.194094  722351 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:58.194188  722351 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:58.195605  722351 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0916 23:56:58.196688  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:56:58.197669  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:58.198952  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.199018  722351 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0916 23:56:58.199034  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:58.199064  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:58.199149  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:58.199167  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:56:58.199618  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:56:58.199650  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json: {Name:mkfd30616e0167206552e80675557cfcc4fee172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:58.218451  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:58.218470  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:58.218487  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:58.218525  722351 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:58.218643  722351 start.go:364] duration metric: took 94.227µs to acquireMachinesLock for "ha-198834"
	I0916 23:56:58.218683  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:56:58.218779  722351 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:58.220943  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:58.221292  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:56:58.221335  722351 client.go:168] LocalClient.Create starting
	I0916 23:56:58.221405  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:56:58.221441  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221461  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221543  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:56:58.221570  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221588  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221956  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:58.238665  722351 cli_runner.go:211] docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:58.238743  722351 network_create.go:284] running [docker network inspect ha-198834] to gather additional debugging logs...
	I0916 23:56:58.238769  722351 cli_runner.go:164] Run: docker network inspect ha-198834
	W0916 23:56:58.254999  722351 cli_runner.go:211] docker network inspect ha-198834 returned with exit code 1
	I0916 23:56:58.255086  722351 network_create.go:287] error running [docker network inspect ha-198834]: docker network inspect ha-198834: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-198834 not found
	I0916 23:56:58.255122  722351 network_create.go:289] output of [docker network inspect ha-198834]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-198834 not found
	
	** /stderr **
	I0916 23:56:58.255285  722351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:58.272422  722351 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b56820}
	I0916 23:56:58.272473  722351 network_create.go:124] attempt to create docker network ha-198834 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:58.272524  722351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-198834 ha-198834
	I0916 23:56:58.332062  722351 network_create.go:108] docker network ha-198834 192.168.49.0/24 created
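Once the network exists, the chosen subnet and gateway can be confirmed with the same sort of inspect template used elsewhere in this log, for example:

	docker network inspect ha-198834 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# 192.168.49.0/24 192.168.49.1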
	I0916 23:56:58.332109  722351 kic.go:121] calculated static IP "192.168.49.2" for the "ha-198834" container
	I0916 23:56:58.332180  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:58.347722  722351 cli_runner.go:164] Run: docker volume create ha-198834 --label name.minikube.sigs.k8s.io=ha-198834 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:58.365722  722351 oci.go:103] Successfully created a docker volume ha-198834
	I0916 23:56:58.365811  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --entrypoint /usr/bin/test -v ha-198834:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:58.752716  722351 oci.go:107] Successfully prepared a docker volume ha-198834
	I0916 23:56:58.752766  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.752791  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:58.752860  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:02.431811  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.678879308s)
	I0916 23:57:02.431852  722351 kic.go:203] duration metric: took 3.679056906s to extract preloaded images to volume ...
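The two docker run invocations above are the kic preload pattern: the host-side tarball is bind-mounted read-only into a throwaway container that also mounts the cluster's named volume, and tar unpacks the cached images into it. A stripped-down sketch of the same idea, with a placeholder volume name and tarball path, would be:

	# illustration only: "example-vol" and the tarball path are placeholders
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v example-vol:/extractDir \
	  gcr.io/k8s-minikube/kicbase:v0.0.48 -I lz4 -xf /preloaded.tar -C /extractDir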
	W0916 23:57:02.431981  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:02.432030  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:02.432094  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:02.483868  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834 --name ha-198834 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834 --network ha-198834 --ip 192.168.49.2 --volume ha-198834:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:02.749244  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Running}}
	I0916 23:57:02.769059  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:02.787342  722351 cli_runner.go:164] Run: docker exec ha-198834 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:02.836161  722351 oci.go:144] the created container "ha-198834" has a running status.
	I0916 23:57:02.836195  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa...
	I0916 23:57:03.023198  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:03.023332  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:03.051071  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.071057  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:03.071081  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:03.121506  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.138447  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:03.138553  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.156407  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.156657  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.156674  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:03.295893  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.295938  722351 ubuntu.go:182] provisioning hostname "ha-198834"
	I0916 23:57:03.296023  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.314748  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.314993  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.315008  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0916 23:57:03.463642  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.463716  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.480946  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.481224  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.481264  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:03.616528  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:03.616561  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:03.616587  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:03.616603  722351 provision.go:84] configureAuth start
	I0916 23:57:03.616666  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:03.633505  722351 provision.go:143] copyHostCerts
	I0916 23:57:03.633553  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633590  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:03.633601  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633689  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:03.633796  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633824  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:03.633834  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633870  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:03.633969  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.633996  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:03.634007  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.634050  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:03.634188  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0916 23:57:03.786555  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:03.786617  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:03.786691  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.804115  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:03.900955  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:03.901014  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:57:03.928655  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:03.928721  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:03.953468  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:03.953537  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:03.978330  722351 provision.go:87] duration metric: took 361.708211ms to configureAuth
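The server certificate generated during configureAuth carries the SANs listed above (127.0.0.1, 192.168.49.2, ha-198834, localhost, minikube); a quick way to double-check the copy that was just pushed to /etc/docker is to inspect the host-side original, for instance:

	openssl x509 -in /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'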
	I0916 23:57:03.978356  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:03.978536  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:03.978599  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.995700  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.995934  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.995954  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:04.131514  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:04.131541  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:04.131675  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:04.131752  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.148752  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.148996  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.149060  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:04.298185  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:04.298270  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.315091  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.315309  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.315326  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:05.420254  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:04.295122578 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
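The diff above captures the essential trick in the generated unit: the bare ExecStart= line clears whatever start command was previously defined, so only the dockerd invocation that follows it (with the TLS flags and the 10.96.0.0/12 insecure-registry range) applies; the comments in the unit explain why systemd otherwise rejects a second ExecStart= on a non-oneshot service. The unit that actually took effect inside the node can be confirmed with, for example:

	docker exec ha-198834 systemctl cat docker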
	
	I0916 23:57:05.420296  722351 machine.go:96] duration metric: took 2.281822221s to provisionDockerMachine
	I0916 23:57:05.420315  722351 client.go:171] duration metric: took 7.198967751s to LocalClient.Create
	I0916 23:57:05.420340  722351 start.go:167] duration metric: took 7.199048943s to libmachine.API.Create "ha-198834"
	I0916 23:57:05.420350  722351 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0916 23:57:05.420364  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:05.420443  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:05.420495  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.437726  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.536164  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:05.539580  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:05.539616  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:05.539633  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:05.539639  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:05.539653  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:05.539713  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:05.539819  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:05.539836  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:05.540001  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:05.548691  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:05.575226  722351 start.go:296] duration metric: took 154.859714ms for postStartSetup
	I0916 23:57:05.575586  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.591876  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:05.592351  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:05.592412  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.609076  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.701881  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:05.706378  722351 start.go:128] duration metric: took 7.487581015s to createHost
	I0916 23:57:05.706400  722351 start.go:83] releasing machines lock for "ha-198834", held for 7.487744986s
	I0916 23:57:05.706457  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.723047  722351 ssh_runner.go:195] Run: cat /version.json
	I0916 23:57:05.723106  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.723117  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:05.723202  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.739830  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.739978  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.900291  722351 ssh_runner.go:195] Run: systemctl --version
	I0916 23:57:05.905029  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:05.909440  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:05.939050  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:05.939153  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:05.968631  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:05.968659  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:05.968693  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:05.968830  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:05.985490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:05.997349  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:06.007949  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:06.008036  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:06.018490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.028804  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:06.039330  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.049816  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:06.059493  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:06.069825  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:06.080461  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:06.091039  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:06.100019  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:06.109126  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.178675  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
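	[editor's note] The sed commands above rewrite /etc/containerd/config.toml in place (SystemdCgroup, sandbox_image, runtime type) before containerd is restarted. A hedged sketch of the same line rewrites using Go's regexp package; the input snippet and paths are assumptions for illustration only:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = false
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	`
		// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
		conf = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).ReplaceAllString(conf, "${1}SystemdCgroup = true")
		// Equivalent of: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
		fmt.Print(conf)
	}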
	I0916 23:57:06.251706  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:06.251760  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:06.251809  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:06.264383  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.275792  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:06.294666  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.306227  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:06.317564  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:06.334759  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:06.338327  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:06.348543  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:06.366680  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:06.432452  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:06.496386  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:06.496496  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:06.515617  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:06.527317  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.590441  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
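	[editor's note] The log only records that a 129-byte /etc/docker/daemon.json was written to make dockerd use the "systemd" cgroup driver; the file's exact contents are not shown. A typical daemon.json selecting that driver (an assumption, not the verbatim file) could be produced like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		daemon := map[string]any{
			// Make dockerd's cgroup driver match the kubelet's (systemd), per the log line above.
			"exec-opts": []string{"native.cgroupdriver=systemd"},
		}
		out, err := json.MarshalIndent(daemon, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}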
	I0916 23:57:07.360810  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:07.372759  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:07.384493  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.396808  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:07.466973  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:07.538629  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.607976  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:07.630119  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:07.642121  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.709050  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:07.784177  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.797686  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:07.797763  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:07.801576  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:07.801630  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:07.804977  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:07.837851  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:07.837957  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.862098  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.888678  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:07.888755  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:07.905526  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:07.909605  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
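	[editor's note] The two commands above perform an idempotent /etc/hosts update: grep for an existing host.minikube.internal entry, then rewrite the file with any stale entry dropped and the desired one appended. A small Go sketch of the same logic, operating on an in-memory string rather than the real file:

	package main

	import (
		"fmt"
		"strings"
	)

	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			// Drop any existing line that already maps the name (tab-separated, as in the log).
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n"
		fmt.Print(upsertHost(hosts, "192.168.49.1", "host.minikube.internal"))
	}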
	I0916 23:57:07.921677  722351 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:57:07.921793  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:07.921842  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.943020  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.943041  722351 docker.go:621] Images already preloaded, skipping extraction
	I0916 23:57:07.943097  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.963583  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.963609  722351 cache_images.go:85] Images are preloaded, skipping loading
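	[editor's note] The two "docker images" listings above feed a simple check: if every image required for v1.34.0 is already present, the preload tarball is not extracted. A sketch of that comparison (image names copied from the log; the helper name is hypothetical):

	package main

	import "fmt"

	func missingImages(required, have []string) []string {
		present := make(map[string]bool, len(have))
		for _, img := range have {
			present[img] = true
		}
		var missing []string
		for _, img := range required {
			if !present[img] {
				missing = append(missing, img)
			}
		}
		return missing
	}

	func main() {
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.34.0",
			"registry.k8s.io/etcd:3.6.4-0",
			"registry.k8s.io/coredns/coredns:v1.12.1",
		}
		have := required // in the log, every required image is already preloaded
		if m := missingImages(required, have); len(m) == 0 {
			fmt.Println("Images already preloaded, skipping extraction")
		} else {
			fmt.Println("need to load:", m)
		}
	}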
	I0916 23:57:07.963623  722351 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0916 23:57:07.963750  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:07.963822  722351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 23:57:08.012977  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:08.013007  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:08.013021  722351 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:57:08.013044  722351 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:57:08.013180  722351 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
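	[editor's note] The kubeadm config rendered above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of sanity-checking such output by decoding each document (requires gopkg.in/yaml.v3; the YAML literal is truncated to two documents for brevity and is not the full config above):

	package main

	import (
		"fmt"
		"io"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		rendered := `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: systemd
	`
		dec := yaml.NewDecoder(strings.NewReader(rendered))
		for {
			var doc map[string]any
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// Report the kind of each document; cgroupDriver is only set on KubeletConfiguration.
			fmt.Printf("kind=%v cgroupDriver=%v\n", doc["kind"], doc["cgroupDriver"])
		}
	}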
	I0916 23:57:08.013203  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:08.013244  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:08.026529  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:08.026652  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
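	[editor's note] The kube-vip config above was generated only after the `lsmod | grep ip_vs` probe a few lines earlier failed, so control-plane load balancing was skipped. A sketch of that probe-and-fallback decision (not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same check the log runs over SSH: is the ip_vs kernel module loaded?
		if err := exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run(); err != nil {
			fmt.Println("ip_vs kernel module not available; skipping control-plane load balancing")
			return
		}
		fmt.Println("ip_vs available; enabling load balancing in the kube-vip manifest")
	}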
	I0916 23:57:08.026716  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:08.036301  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:08.036379  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:57:08.046128  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 23:57:08.064738  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:08.083216  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0916 23:57:08.101114  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:57:08.121332  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:08.125035  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:08.136734  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:08.207460  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:08.231438  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0916 23:57:08.231468  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:08.231491  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.231634  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:08.231682  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:08.231692  722351 certs.go:256] generating profile certs ...
	I0916 23:57:08.231748  722351 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:08.231761  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt with IP's: []
	I0916 23:57:08.595971  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt ...
	I0916 23:57:08.596008  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt: {Name:mk045c8005e18afdd173496398fb640e85421530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596237  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key ...
	I0916 23:57:08.596255  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key: {Name:mkec7f349d5172bad8ab50dce27926cf4a2810b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596372  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28
	I0916 23:57:08.596390  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:57:08.930707  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 ...
	I0916 23:57:08.930740  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28: {Name:mke8743bf1c0faa0b20cb0336c0e1879fcb77e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.930956  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 ...
	I0916 23:57:08.930975  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28: {Name:mkd63d446f2fe51bc154cd1e5df7f39c484f911b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.931094  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:08.931221  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:57:08.931283  722351 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:08.931298  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt with IP's: []
	I0916 23:57:09.286083  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt ...
	I0916 23:57:09.286118  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt: {Name:mk7d8f9e6931aff0b35e5110e6bb582a3f00c824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286322  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key ...
	I0916 23:57:09.286339  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key: {Name:mkaeef389ff7f9a0b6729cce56a45b0b3aa13296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
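	[editor's note] The certificate steps above generate a profile apiserver cert whose SANs include the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]. A hedged sketch of producing a certificate with those IP SANs using crypto/x509; it is self-signed here for brevity, whereas the real cert is signed by the minikubeCA key, which this example does not model:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// IP SANs copied from the log line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}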
	I0916 23:57:09.286448  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:09.286467  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:09.286479  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:09.286489  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:09.286513  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:09.286527  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:09.286538  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:09.286550  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:09.286602  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:09.286641  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:09.286650  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:09.286674  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:09.286702  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:09.286730  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:09.286767  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:09.286792  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.286805  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.286817  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.287381  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:09.312982  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:09.337940  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:09.362347  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:09.386557  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:57:09.412140  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:09.436893  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:09.461871  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:09.487876  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:09.516060  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:09.541440  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:09.567069  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:57:09.585649  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:09.591504  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:09.602004  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605727  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605791  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.612679  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:09.622556  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:09.632414  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636379  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636441  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.643659  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:09.653893  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:09.663837  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667554  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667899  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.675833  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:09.686032  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:09.689851  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:09.689923  722351 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:09.690062  722351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 23:57:09.708774  722351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:57:09.718368  722351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:57:09.727825  722351 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:57:09.727888  722351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:57:09.738106  722351 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:57:09.738126  722351 kubeadm.go:157] found existing configuration files:
	
	I0916 23:57:09.738165  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:57:09.747962  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:57:09.748017  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:57:09.757385  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:57:09.766772  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:57:09.766839  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:57:09.775735  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.784848  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:57:09.784955  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.793751  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:57:09.803170  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:57:09.803229  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:57:09.811944  722351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:57:09.867145  722351 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:57:09.919246  722351 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:57:19.614241  722351 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:57:19.614308  722351 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:57:19.614466  722351 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:57:19.614561  722351 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:57:19.614607  722351 kubeadm.go:310] OS: Linux
	I0916 23:57:19.614692  722351 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:57:19.614771  722351 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:57:19.614837  722351 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:57:19.614899  722351 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:57:19.614977  722351 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:57:19.615057  722351 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:57:19.615125  722351 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:57:19.615202  722351 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:57:19.615307  722351 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:57:19.615454  722351 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:57:19.615594  722351 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:57:19.615688  722351 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:57:19.618162  722351 out.go:252]   - Generating certificates and keys ...
	I0916 23:57:19.618260  722351 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:57:19.618349  722351 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:57:19.618445  722351 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:57:19.618533  722351 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:57:19.618635  722351 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:57:19.618717  722351 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:57:19.618792  722351 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:57:19.618993  722351 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619071  722351 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:57:19.619249  722351 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619335  722351 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:57:19.619434  722351 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:57:19.619517  722351 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:57:19.619599  722351 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:57:19.619679  722351 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:57:19.619763  722351 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:57:19.619846  722351 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:57:19.619990  722351 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:57:19.620069  722351 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:57:19.620183  722351 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:57:19.620281  722351 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:57:19.621487  722351 out.go:252]   - Booting up control plane ...
	I0916 23:57:19.621595  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:57:19.621704  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:57:19.621799  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:57:19.621956  722351 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:57:19.622047  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:57:19.622137  722351 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:57:19.622213  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:57:19.622246  722351 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:57:19.622371  722351 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:57:19.622503  722351 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:57:19.622564  722351 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000941296s
	I0916 23:57:19.622663  722351 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:57:19.622778  722351 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:57:19.622893  722351 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:57:19.623021  722351 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:57:19.623126  722351 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.545161134s
	I0916 23:57:19.623210  722351 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.1638517s
	I0916 23:57:19.623273  722351 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001738286s
	I0916 23:57:19.623369  722351 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:57:19.623478  722351 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:57:19.623551  722351 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:57:19.623792  722351 kubeadm.go:310] [mark-control-plane] Marking the node ha-198834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:57:19.623845  722351 kubeadm.go:310] [bootstrap-token] Using token: wg2on6.splp3qzu9xv61vdp
	I0916 23:57:19.625599  722351 out.go:252]   - Configuring RBAC rules ...
	I0916 23:57:19.625697  722351 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:57:19.625769  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:57:19.625966  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:57:19.626123  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:57:19.626261  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:57:19.626367  722351 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:57:19.626473  722351 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:57:19.626522  722351 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:57:19.626564  722351 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:57:19.626570  722351 kubeadm.go:310] 
	I0916 23:57:19.626631  722351 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:57:19.626643  722351 kubeadm.go:310] 
	I0916 23:57:19.626737  722351 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:57:19.626747  722351 kubeadm.go:310] 
	I0916 23:57:19.626781  722351 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:57:19.626863  722351 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:57:19.626960  722351 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:57:19.626973  722351 kubeadm.go:310] 
	I0916 23:57:19.627050  722351 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:57:19.627058  722351 kubeadm.go:310] 
	I0916 23:57:19.627113  722351 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:57:19.627119  722351 kubeadm.go:310] 
	I0916 23:57:19.627167  722351 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:57:19.627238  722351 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:57:19.627297  722351 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:57:19.627302  722351 kubeadm.go:310] 
	I0916 23:57:19.627381  722351 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:57:19.627449  722351 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:57:19.627454  722351 kubeadm.go:310] 
	I0916 23:57:19.627525  722351 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627618  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e \
	I0916 23:57:19.627647  722351 kubeadm.go:310] 	--control-plane 
	I0916 23:57:19.627653  722351 kubeadm.go:310] 
	I0916 23:57:19.627725  722351 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:57:19.627733  722351 kubeadm.go:310] 
	I0916 23:57:19.627801  722351 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627921  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e 
	I0916 23:57:19.627933  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:19.627939  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:19.630017  722351 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:57:19.631017  722351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:57:19.635194  722351 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:57:19.635211  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:57:19.655634  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:57:19.855102  722351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:57:19.855186  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:19.855265  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834 minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=true
	I0916 23:57:19.863538  722351 ops.go:34] apiserver oom_adj: -16
	I0916 23:57:19.931275  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.432025  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.932100  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.432105  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.932376  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.432213  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.931583  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.431392  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.932193  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.431927  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.504799  722351 kubeadm.go:1105] duration metric: took 4.649687278s to wait for elevateKubeSystemPrivileges
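	[editor's note] The repeated "kubectl get sa default" runs above poll until the default ServiceAccount exists, i.e. until the controller-manager has populated the default namespace, before granting the minikube-rbac clusterrolebinding takes effect. A sketch of the same wait using client-go (an assumption for illustration; the log shells out to kubectl instead, and the kubeconfig path is taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Retry every 500ms for up to a minute until the "default" ServiceAccount appears.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, getErr := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
				return getErr == nil, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("default service account is ready")
	}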
	I0916 23:57:24.504835  722351 kubeadm.go:394] duration metric: took 14.81493092s to StartCluster
	I0916 23:57:24.504858  722351 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.504967  722351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:57:24.505808  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.506080  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:57:24.506079  722351 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:24.506102  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.506120  722351 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:57:24.506215  722351 addons.go:69] Setting storage-provisioner=true in profile "ha-198834"
	I0916 23:57:24.506241  722351 addons.go:238] Setting addon storage-provisioner=true in "ha-198834"
	I0916 23:57:24.506236  722351 addons.go:69] Setting default-storageclass=true in profile "ha-198834"
	I0916 23:57:24.506263  722351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-198834"
	I0916 23:57:24.506271  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.506311  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:24.506630  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.506797  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.527476  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:24.528010  722351 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:57:24.528028  722351 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:57:24.528032  722351 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:57:24.528036  722351 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:57:24.528039  722351 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:57:24.528105  722351 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:57:24.528384  722351 addons.go:238] Setting addon default-storageclass=true in "ha-198834"
	I0916 23:57:24.528420  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.528683  722351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:57:24.528891  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.530050  722351 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.530067  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:57:24.530109  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.548463  722351 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.548490  722351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:57:24.548552  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.551711  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.575963  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.622716  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:57:24.680948  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.725959  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.815565  722351 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
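For reference, the sed pipeline a few lines above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the network gateway; after the replace, the injected stanza in the Corefile looks roughly like this (reconstructed from the sed expression, not dumped from the cluster), with a "log" directive also inserted ahead of "errors":

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}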
	I0916 23:57:25.027949  722351 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:57:25.029176  722351 addons.go:514] duration metric: took 523.059617ms for enable addons: enabled=[storage-provisioner default-storageclass]
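Both addons are applied with the node-local kubectl against /var/lib/minikube/kubeconfig, as shown above. A quick way to confirm they landed (assuming the profile name ha-198834 from this run) is:

	out/minikube-linux-amd64 -p ha-198834 addons list
	out/minikube-linux-amd64 -p ha-198834 kubectl -- get storageclass
	out/minikube-linux-amd64 -p ha-198834 kubectl -- -n kube-system get pods

The storage-provisioner pod and a default-annotated StorageClass should both show up.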
	I0916 23:57:25.029216  722351 start.go:246] waiting for cluster config update ...
	I0916 23:57:25.029233  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:25.030834  722351 out.go:203] 
	I0916 23:57:25.032180  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:25.032246  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.033846  722351 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0916 23:57:25.035651  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:25.036699  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:25.038502  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.038524  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:25.038599  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:25.038624  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:25.038635  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:25.038696  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.064556  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:25.064575  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:25.064593  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:25.064625  722351 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:25.064737  722351 start.go:364] duration metric: took 87.928µs to acquireMachinesLock for "ha-198834-m02"
	I0916 23:57:25.064767  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:25.064852  722351 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:57:25.067030  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:25.067261  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:25.067302  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:25.067392  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:25.067435  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067451  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067520  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:25.067544  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067561  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067817  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:25.087287  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc0008ae780 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:25.087329  722351 kic.go:121] calculated static IP "192.168.49.3" for the "ha-198834-m02" container
	I0916 23:57:25.087390  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:25.104356  722351 cli_runner.go:164] Run: docker volume create ha-198834-m02 --label name.minikube.sigs.k8s.io=ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:25.128318  722351 oci.go:103] Successfully created a docker volume ha-198834-m02
	I0916 23:57:25.128423  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --entrypoint /usr/bin/test -v ha-198834-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:25.555443  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m02
	I0916 23:57:25.555486  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.555507  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:25.555574  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.769985  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.214340138s)
	I0916 23:57:29.770025  722351 kic.go:203] duration metric: took 4.214511914s to extract preloaded images to volume ...
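The preload tarball is unpacked directly into the node's Docker volume before the node container itself exists. If you ever need to check what ended up in the volume, a throwaway container works (busybox is only an example image here, not something this run used):

	docker run --rm -v ha-198834-m02:/var busybox ls /var/lib/docker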
	W0916 23:57:29.770138  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.770180  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.770230  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.831280  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m02 --name ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m02 --network ha-198834 --ip 192.168.49.3 --volume ha-198834-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:30.118263  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Running}}
	I0916 23:57:30.140753  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.161053  722351 cli_runner.go:164] Run: docker exec ha-198834-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:30.204746  722351 oci.go:144] the created container "ha-198834-m02" has a running status.
	I0916 23:57:30.204782  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa...
	I0916 23:57:30.491277  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:30.491341  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:30.523169  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.546155  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:30.546178  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.603616  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
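All access to the new node goes through ports Docker publishes on 127.0.0.1; the SSH port for this node shows up as 32788 further down in this log. With the key generated above you could reach it by hand:

	docker port ha-198834-m02 22
	ssh -i /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa -p 32788 docker@127.0.0.1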
	I0916 23:57:30.624695  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.624784  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.648569  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.648946  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.648966  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.800750  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.800784  722351 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0916 23:57:30.800873  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.822237  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.822505  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.822519  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0916 23:57:30.984206  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.984307  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.007082  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.007398  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.007430  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:31.152561  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:31.152598  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:31.152624  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:31.152644  722351 provision.go:84] configureAuth start
	I0916 23:57:31.152709  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:31.171931  722351 provision.go:143] copyHostCerts
	I0916 23:57:31.171978  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172008  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:31.172014  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172081  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:31.172159  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172181  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:31.172185  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172216  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:31.172262  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172279  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:31.172287  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172310  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:31.172361  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0916 23:57:31.314068  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:31.314146  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:31.314208  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.336792  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:31.442195  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:31.442269  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:31.472780  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:31.472841  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:31.499569  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:31.499653  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:31.530277  722351 provision.go:87] duration metric: took 377.61476ms to configureAuth
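configureAuth issues a server certificate for the node (SANs 127.0.0.1, 192.168.49.3, ha-198834-m02, localhost, minikube) and pushes it to /etc/docker so dockerd can serve TLS on port 2376. An offline sanity check against the CA, using the host-side paths from this run, would be:

	openssl verify -CAfile /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem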
	I0916 23:57:31.530311  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:31.530528  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:31.530587  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.548573  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.548821  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.548841  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:31.695327  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:31.695357  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:31.695559  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:31.695639  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.715926  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.716269  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.716384  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:31.879960  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:31.880054  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.901465  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.901783  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.901817  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:33.107385  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:31.877658246 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:33.107432  722351 machine.go:96] duration metric: took 2.482713737s to provisionDockerMachine
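The diff above is the point of the unit rewrite: the bare ExecStart= line clears the distro's default command before the minikube-specific dockerd flags are set, which is why the comment block about "more than one ExecStart=" is there. Once docker has restarted, the unit actually in effect and the flags dockerd picked up can be confirmed on the node with:

	sudo systemctl cat docker.service
	ps -o args= -C dockerd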
	I0916 23:57:33.107448  722351 client.go:171] duration metric: took 8.040135103s to LocalClient.Create
	I0916 23:57:33.107471  722351 start.go:167] duration metric: took 8.040214449s to libmachine.API.Create "ha-198834"
	I0916 23:57:33.107480  722351 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0916 23:57:33.107493  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:33.107570  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:33.107624  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.129478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.235200  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:33.239799  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:33.239842  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:33.239854  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:33.239862  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:33.239881  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:33.239961  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:33.240070  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:33.240085  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:33.240211  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:33.252619  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:33.291135  722351 start.go:296] duration metric: took 183.636707ms for postStartSetup
	I0916 23:57:33.291600  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.313645  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:33.314041  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:33.314103  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.337314  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.439716  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:33.445408  722351 start.go:128] duration metric: took 8.380530846s to createHost
	I0916 23:57:33.445437  722351 start.go:83] releasing machines lock for "ha-198834-m02", held for 8.380681461s
	I0916 23:57:33.445500  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.469661  722351 out.go:179] * Found network options:
	I0916 23:57:33.471226  722351 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:33.472373  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:33.472429  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:33.472520  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:33.472550  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:33.472570  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.472621  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.495822  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.496478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.601441  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:33.704002  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:33.704085  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:33.742848  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
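Renaming the stock bridge/podman CNI configs to *.mk_disabled leaves only the (patched) loopback config active, so the CNI that minikube installs for this multi-node profile does not conflict with them. The disabled files can be listed on the node with:

	ls /etc/cni/net.d/*.mk_disabled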
	I0916 23:57:33.742881  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:33.742929  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:33.743066  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:33.765394  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:33.781702  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:33.796106  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:33.796186  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:33.811490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.825594  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:33.840006  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.853819  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:33.867424  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:33.882022  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:33.896562  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:33.910813  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:33.923436  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:33.936892  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.033978  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
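The sed edits above switch containerd to the systemd cgroup driver to match the driver detected on the host. In a v2-style /etc/containerd/config.toml the resulting setting sits roughly here (a sketch of the well-known stanza, not a dump from this node):

	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true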
	I0916 23:57:34.137820  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:34.137955  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:34.138026  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:34.154788  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.170769  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:34.190397  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.207526  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:34.224333  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:34.249827  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:34.255532  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:34.270253  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:34.296311  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:34.391517  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:34.486390  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:34.486452  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:34.512957  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:34.529696  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.623612  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
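The small /etc/docker/daemon.json written just above carries the matching systemd cgroup-driver setting for dockerd. After the restart, the driver in effect can be confirmed with:

	docker info --format '{{.CgroupDriver}}'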
	I0916 23:57:35.389236  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:35.402665  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:35.418828  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.433733  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:35.524509  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:35.615815  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.688879  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:35.713552  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:35.729264  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.818355  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:35.908063  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.921416  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:35.921483  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:35.925600  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:35.925666  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:35.929510  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:35.970926  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:35.971002  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.001052  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.032731  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:36.033881  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:36.035387  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:36.055948  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:36.061767  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:36.076229  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:57:36.076482  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:36.076794  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:36.099199  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:36.099483  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0916 23:57:36.099498  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:36.099514  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.099667  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:36.099721  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:36.099735  722351 certs.go:256] generating profile certs ...
	I0916 23:57:36.099834  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:36.099867  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4
	I0916 23:57:36.099889  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:36.171638  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 ...
	I0916 23:57:36.171669  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4: {Name:mk274e4893d598b40c8fed777bc1c7c2e951159a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.171866  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 ...
	I0916 23:57:36.171885  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4: {Name:mkf2a66869f0c345fb28cc9925dc0bb02623a928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.172011  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:36.172195  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
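The apiserver certificate is re-issued here because it must now cover the new control-plane IP 192.168.49.3 alongside the existing 192.168.49.2, the in-cluster service IP 10.96.0.1 and the HA virtual IP 192.168.49.254. The SAN list of the regenerated cert can be inspected from the host with:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt | grep -A1 'Subject Alternative Name'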
	I0916 23:57:36.172362  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:36.172381  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:36.172396  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:36.172415  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:36.172438  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:36.172457  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:36.172474  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:36.172493  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:36.172512  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:36.172589  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:36.172634  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:36.172648  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:36.172679  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:36.172703  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:36.172736  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:36.172796  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:36.172840  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.172861  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.172878  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.172963  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:36.194873  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:36.286293  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:36.291948  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:36.308150  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:36.312206  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:36.325598  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:36.329618  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:36.346110  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:36.350017  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:36.365628  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:36.369445  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:36.383675  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:36.387388  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:57:36.403394  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:36.432068  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:36.461592  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:36.491261  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:36.523895  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:36.552719  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:36.580284  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:36.608342  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:36.639670  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:36.672003  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:36.703856  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:36.734275  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:36.755638  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:36.777805  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:36.799338  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:36.821463  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:36.843600  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:57:36.867808  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:36.889233  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:36.896091  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:36.908363  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913145  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913212  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.921857  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:36.934186  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:36.945282  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949180  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949249  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.958068  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:36.970160  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:36.981053  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985350  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985410  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.993828  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
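	[editor's note] The three openssl/ln sequences above install each CA into the node's OpenSSL trust store: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and symlink <hash>.0 to it under /etc/ssl/certs so OpenSSL's hashed lookup can find it. A minimal Go sketch of that pattern, assuming root privileges and an openssl binary on PATH (illustrative only, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a PEM certificate into OpenSSL's hashed trust directory,
// the same effect as the `openssl x509 -hash` + `ln -fs` pair in the log.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // behave like ln -f: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

	Running this for minikubeCA.pem would recreate the b5213941.0 link seen in the command above.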
	I0916 23:57:37.004616  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:37.008764  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:37.008830  722351 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0916 23:57:37.008961  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:37.008998  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:37.009050  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:37.026582  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:37.026656  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
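	[editor's note] Two things happen just above: the ip_vs probe fails, so the kube-vip manifest is generated without control-plane load-balancing (the VIP 192.168.49.254 still fails over via ARP and leader election), and the resulting static-pod manifest is handed to the kubelet. A rough Go sketch of the module probe, with the same semantics as `lsmod | grep ip_vs` (lsmod is just a view of /proc/modules, and built-in modules would be missed either way); this is an illustration, not the kube-vip.go code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsAvailable reports whether any ip_vs* kernel module is loaded by
// scanning /proc/modules, the data source that `lsmod` formats.
func ipvsAvailable() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") { // e.g. ip_vs, ip_vs_rr, ...
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := ipvsAvailable()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("ip_vs loaded: control-plane load-balancing could be enabled")
	} else {
		fmt.Println("ip_vs missing: keep plain VIP failover, as in the log above")
	}
}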
	I0916 23:57:37.026738  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:37.036867  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:37.036974  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:37.046606  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:57:37.070259  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:37.092325  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:37.116853  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:37.120789  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
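	[editor's note] The grep/echo one-liner above keeps /etc/hosts idempotent: any stale control-plane.minikube.internal line is dropped and a single entry pointing at the VIP 192.168.49.254 is appended. The same pattern as a small Go sketch (a hypothetical helper, not the ssh_runner implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line whose last field is host and
// appends "ip<TAB>host", mirroring the shell pipeline in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue // stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}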
	I0916 23:57:37.137396  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:37.223494  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:37.256254  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:37.256574  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:37.256705  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:37.256762  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:37.278264  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:37.435308  722351 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:37.435366  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:54.013635  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (16.578241326s)
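	[editor's note] The join command above pins the cluster identity with --discovery-token-ca-cert-hash; that value is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes it from the CA file copied to the node earlier (path taken from the log; the helper itself is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns kubeadm's discovery hash format: sha256 over the CA
// certificate's DER-encoded Subject Public Key Info.
func caCertHash(path string) (string, error) {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(h) // should match the hash passed to `kubeadm join` above
}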
	I0916 23:57:54.013701  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:54.233708  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:57:54.308006  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:54.383356  722351 start.go:319] duration metric: took 17.126777498s to joinCluster
	I0916 23:57:54.383433  722351 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:54.383691  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:54.385020  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:54.386187  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:54.491315  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:54.505328  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:54.505398  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:54.505659  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508947  722351 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0916 23:57:56.508979  722351 node_ready.go:38] duration metric: took 2.003299323s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508998  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:56.509065  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:56.521258  722351 api_server.go:72] duration metric: took 2.137779117s to wait for apiserver process to appear ...
	I0916 23:57:56.521298  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:56.521326  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:56.527086  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:56.528055  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:56.528078  722351 api_server.go:131] duration metric: took 6.77168ms to wait for apiserver health ...
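	[editor's note] The healthz wait above is a plain HTTP poll against the apiserver until it answers 200/ok. A self-contained Go sketch of that loop; TLS verification is skipped here only to keep the example short, whereas the real client config above uses the profile's client certificate and CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls the URL until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "ok" case in the log
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ok")
}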
	I0916 23:57:56.528087  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:56.534412  722351 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:56.534478  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.534486  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.534497  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.534503  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.534515  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534524  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.534535  722351 system_pods.go:61] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534541  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.534547  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.534559  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.534564  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.534667  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.534716  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534725  722351 system_pods.go:61] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534731  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.534743  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.534748  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.534753  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.534758  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.534765  722351 system_pods.go:74] duration metric: took 6.672375ms to wait for pod list to return data ...
	I0916 23:57:56.534774  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:56.538351  722351 default_sa.go:45] found service account: "default"
	I0916 23:57:56.538385  722351 default_sa.go:55] duration metric: took 3.603096ms for default service account to be created ...
	I0916 23:57:56.538399  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:56.542274  722351 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:56.542301  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.542307  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.542311  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.542314  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.542321  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542325  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.542330  722351 system_pods.go:89] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542334  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.542338  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.542344  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.542347  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.542351  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.542356  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542367  722351 system_pods.go:89] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542371  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.542375  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.542377  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.542380  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.542384  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.542393  722351 system_pods.go:126] duration metric: took 3.988364ms to wait for k8s-apps to be running ...
	I0916 23:57:56.542403  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:56.542447  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:56.554466  722351 system_svc.go:56] duration metric: took 12.054188ms WaitForService to wait for kubelet
	I0916 23:57:56.554496  722351 kubeadm.go:578] duration metric: took 2.171026353s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:56.554519  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:56.557501  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557532  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557552  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557557  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557561  722351 node_conditions.go:105] duration metric: took 3.037317ms to run NodePressure ...
	I0916 23:57:56.557575  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:56.557610  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:56.559549  722351 out.go:203] 
	I0916 23:57:56.561097  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:56.561232  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.562855  722351 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0916 23:57:56.563951  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:56.565051  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:56.566271  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:56.566290  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:56.566373  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:56.566383  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:56.566485  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:56.566581  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.586635  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:56.586656  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:56.586673  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:56.586704  722351 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:56.586811  722351 start.go:364] duration metric: took 87.391µs to acquireMachinesLock for "ha-198834-m03"
	I0916 23:57:56.586843  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:56.587003  722351 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:56.589063  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:56.589158  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:56.589187  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:56.589263  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:56.589299  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589313  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589365  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:56.589385  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589398  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589634  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:56.607248  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc001595440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:56.607297  722351 kic.go:121] calculated static IP "192.168.49.4" for the "ha-198834-m03" container
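	[editor's note] The static IP 192.168.49.4 above follows from the existing ha-198834 network: the gateway holds .1, the first control-plane node .2, m02 .3, so the third node gets .4. A toy Go sketch of that derivation, assuming a small IPv4 subnet like the /24 in the log (not the kic driver's actual allocator):

package main

import (
	"fmt"
	"net"
)

// nodeIP returns the address for the k-th node in a subnet whose gateway is .1,
// so node 1 -> .2, node 2 -> .3, node 3 -> .4, and so on.
func nodeIP(cidr string, nodeIndex int) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	base := ipnet.IP.To4()
	if base == nil {
		return nil, fmt.Errorf("%s is not an IPv4 subnet", cidr)
	}
	out := make(net.IP, len(base))
	copy(out, base)
	out[3] += byte(nodeIndex + 1) // assumes the subnet is large enough
	return out, nil
}

func main() {
	ip, err := nodeIP("192.168.49.0/24", 3) // third node -> 192.168.49.4
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}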
	I0916 23:57:56.607371  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:56.624198  722351 cli_runner.go:164] Run: docker volume create ha-198834-m03 --label name.minikube.sigs.k8s.io=ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:56.642183  722351 oci.go:103] Successfully created a docker volume ha-198834-m03
	I0916 23:57:56.642258  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --entrypoint /usr/bin/test -v ha-198834-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:57.021785  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m03
	I0916 23:57:57.021834  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:57.021864  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:57.021952  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:59.672995  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.650992477s)
	I0916 23:57:59.673039  722351 kic.go:203] duration metric: took 2.651177157s to extract preloaded images to volume ...
	W0916 23:57:59.673144  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:59.673190  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:59.673255  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:59.730169  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m03 --name ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m03 --network ha-198834 --ip 192.168.49.4 --volume ha-198834-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:58:00.013728  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Running}}
	I0916 23:58:00.034076  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.054832  722351 cli_runner.go:164] Run: docker exec ha-198834-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:58:00.109517  722351 oci.go:144] the created container "ha-198834-m03" has a running status.
	I0916 23:58:00.109546  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa...
	I0916 23:58:00.621029  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:58:00.621097  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:58:00.651614  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.673435  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:58:00.673460  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
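	[editor's note] Above, a fresh SSH keypair is generated for the new node and its public half is installed as /home/docker/.ssh/authorized_keys inside the container. A hypothetical sketch of the key-generation step using the standard library plus golang.org/x/crypto/ssh (an assumed external dependency; the output file names are placeholders):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA key, written as an unencrypted PEM private key (id_rsa).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}

	// The matching id_rsa.pub / authorized_keys line.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	authorized := ssh.MarshalAuthorizedKey(pub)
	if err := os.WriteFile("id_rsa.pub", authorized, 0644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote id_rsa and id_rsa.pub (%d bytes)\n", len(authorized))
}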
	I0916 23:58:00.730412  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.749865  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:58:00.750006  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.771445  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.771738  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.771754  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:58:00.920523  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:00.920553  722351 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0916 23:58:00.920616  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.940561  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.940837  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.940853  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0916 23:58:01.103101  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:01.103204  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:01.125182  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:01.125511  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:01.125543  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:58:01.275155  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:58:01.275201  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:58:01.275231  722351 ubuntu.go:190] setting up certificates
	I0916 23:58:01.275246  722351 provision.go:84] configureAuth start
	I0916 23:58:01.275318  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:01.296305  722351 provision.go:143] copyHostCerts
	I0916 23:58:01.296378  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296426  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:58:01.296439  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296527  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:58:01.296632  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296656  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:58:01.296682  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296726  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:58:01.296788  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296825  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:58:01.296835  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296924  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:58:01.297040  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
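	[editor's note] configureAuth above issues a per-machine Docker TLS server certificate signed by the local CA, with the node's addresses and names as SANs (127.0.0.1, 192.168.49.4, ha-198834-m03, localhost, minikube). A condensed Go sketch of such issuance; the PEM layout of the CA key, the file names, and the validity period are assumptions, not minikube's exact settings:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caCert, caKey := mustLoadCA("ca.pem", "ca-key.pem")

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed lifetime
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
		DNSNames:    []string{"ha-198834-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey),
	}), 0600)
}

// mustLoadCA reads a PEM CA certificate and a PKCS#1 RSA key (layout assumed).
func mustLoadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		panic(err)
	}
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	cb, _ := pem.Decode(certPEM)
	kb, _ := pem.Decode(keyPEM)
	if cb == nil || kb == nil {
		panic("missing PEM data in CA files")
	}
	cert, err := x509.ParseCertificate(cb.Bytes)
	if err != nil {
		panic(err)
	}
	key, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
	if err != nil {
		panic(err)
	}
	return cert, key
}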
	I0916 23:58:02.100987  722351 provision.go:177] copyRemoteCerts
	I0916 23:58:02.101048  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:58:02.101084  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.119475  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:02.218802  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:58:02.218870  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:58:02.251628  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:58:02.251700  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:58:02.279052  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:58:02.279124  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:58:02.305168  722351 provision.go:87] duration metric: took 1.029902032s to configureAuth
	I0916 23:58:02.305208  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:58:02.305440  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:02.305491  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.322139  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.322413  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.322428  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:58:02.459594  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:58:02.459629  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:58:02.459746  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:58:02.459804  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.476657  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.476985  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.477099  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:58:02.633394  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:58:02.633489  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.651145  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.651390  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.651410  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:58:03.800032  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:58:02.631485455 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:58:03.800077  722351 machine.go:96] duration metric: took 3.050188223s to provisionDockerMachine
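	[editor's note] The `diff ... || { mv ...; systemctl ... }` command earlier only swaps in the rendered docker.service and restarts Docker when the unit actually changed; here the diff is non-empty, so the daemon is reloaded and restarted (roughly the one-second gap before 23:58:03.800). A simplified Go sketch of that change-detection pattern (paths and the rendered unit body are placeholders):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit rewrites the unit file and restarts docker only if the rendered
// content differs from what is already installed.
func updateUnit(path, rendered string) error {
	oldData, _ := os.ReadFile(path) // a missing file simply counts as "different"
	newData := []byte(rendered)
	if bytes.Equal(oldData, newData) {
		return nil // unchanged: skip daemon-reload and restart
	}
	if err := os.WriteFile(path, newData, 0644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := "[Unit]\nDescription=Docker Application Container Engine\n" // rendered elsewhere
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}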
	I0916 23:58:03.800094  722351 client.go:171] duration metric: took 7.210891992s to LocalClient.Create
	I0916 23:58:03.800121  722351 start.go:167] duration metric: took 7.210962522s to libmachine.API.Create "ha-198834"
	I0916 23:58:03.800131  722351 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0916 23:58:03.800155  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:58:03.800229  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:58:03.800295  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.817949  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:03.918038  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:58:03.922382  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:58:03.922420  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:58:03.922430  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:58:03.922438  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:58:03.922452  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:58:03.922512  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:58:03.922607  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:58:03.922620  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:58:03.922727  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:58:03.932298  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:03.961387  722351 start.go:296] duration metric: took 161.230642ms for postStartSetup
	I0916 23:58:03.961811  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:03.979123  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:58:03.979395  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:58:03.979437  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.997520  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.091253  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:58:04.096537  722351 start.go:128] duration metric: took 7.509514126s to createHost
	I0916 23:58:04.096585  722351 start.go:83] releasing machines lock for "ha-198834-m03", held for 7.509743952s
	I0916 23:58:04.096660  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:04.115702  722351 out.go:179] * Found network options:
	I0916 23:58:04.117029  722351 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:58:04.118232  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118256  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118281  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118299  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:58:04.118395  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:58:04.118441  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.118449  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:58:04.118515  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.136875  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.137594  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.231418  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:58:04.311016  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:58:04.311108  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:58:04.340810  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
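
The two commands above first patch the loopback CNI config in place and then move any bridge/podman configs aside by giving them a .mk_disabled suffix, so only the CNI that minikube manages stays active. A rough Go sketch of that second step, using globbing instead of the `find ... -exec mv` call in the log (illustrative only; the real flow shells out over SSH as shown):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI configs in dir so the
// container runtime stops loading them, mirroring the `find ... -exec mv`
// invocation in the log above.
func disableBridgeConfigs(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already moved aside
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	moved, err := disableBridgeConfigs("/etc/cni/net.d")
	fmt.Println(moved, err)
}
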
	I0916 23:58:04.340841  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.340871  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.340997  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.359059  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:58:04.371794  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:58:04.383345  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:58:04.383421  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:58:04.394513  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.405081  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:58:04.415653  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.426510  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:58:04.436405  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:58:04.447135  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:58:04.457926  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:58:04.469563  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:58:04.478599  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:58:04.488307  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:04.557785  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:58:04.636805  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.636855  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.636899  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:58:04.649865  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.662323  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:58:04.680711  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.693319  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:58:04.705665  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.723842  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:58:04.727547  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:58:04.738845  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:58:04.758974  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:58:04.830471  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:58:04.900429  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:58:04.900482  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:58:04.920093  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:58:04.931599  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:05.002855  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:58:05.807532  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:58:05.819728  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:58:05.832303  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:05.844347  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:58:05.916277  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:58:05.988520  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.055206  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:58:06.080490  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:58:06.092817  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.162707  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:58:06.248276  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:06.261931  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:58:06.262000  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:58:06.265868  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:58:06.265941  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:58:06.269385  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:58:06.305058  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:58:06.305139  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.331725  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.358446  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:58:06.359714  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:58:06.360964  722351 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:58:06.362187  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:58:06.379025  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:58:06.383173  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
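
The one-liner above makes the /etc/hosts update idempotent: it strips any existing line ending in "\thost.minikube.internal", appends a fresh mapping to the gateway 192.168.49.1, and copies the result back with sudo. A stdlib-Go sketch of the same pattern (paths and values hard-coded for illustration; the real code runs the shell command over SSH as logged):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line for hostname and appends a fresh
// "<ip>\t<hostname>" mapping, mirroring the grep -v / echo pipeline above.
// Writing /etc/hosts directly needs root, just like the `sudo cp` in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // remove the old entry, if any
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
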
	I0916 23:58:06.394963  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:58:06.395208  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:06.395415  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:58:06.412700  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:06.412979  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0916 23:58:06.412992  722351 certs.go:194] generating shared ca certs ...
	I0916 23:58:06.413008  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:06.413150  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:58:06.413202  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:58:06.413213  722351 certs.go:256] generating profile certs ...
	I0916 23:58:06.413290  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:58:06.413316  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0916 23:58:06.413331  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:58:07.059616  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 ...
	I0916 23:58:07.059648  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783: {Name:mka6f3e20ae0db98330bce12c7c53c8ceb029f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.059850  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 ...
	I0916 23:58:07.059873  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783: {Name:mk88fba5116449476945068bb066a5fae095ca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.060019  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:58:07.060173  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:58:07.060303  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:58:07.060320  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:58:07.060332  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:58:07.060346  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:58:07.060359  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:58:07.060371  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:58:07.060383  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:58:07.060395  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:58:07.060407  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:58:07.060462  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:58:07.060492  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:58:07.060502  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:58:07.060525  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:58:07.060546  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:58:07.060571  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:58:07.060609  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:07.060634  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.060648  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.060666  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.060725  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:07.077675  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:07.167227  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:58:07.171339  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:58:07.184631  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:58:07.188345  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:58:07.201195  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:58:07.204727  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:58:07.217344  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:58:07.220977  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:58:07.233804  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:58:07.237296  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:58:07.250936  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:58:07.254504  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:58:07.267513  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:58:07.293250  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:58:07.319357  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:58:07.345045  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:58:07.370793  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:58:07.397411  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:58:07.422329  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:58:07.447186  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:58:07.472564  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:58:07.500373  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:58:07.526598  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:58:07.552426  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:58:07.570062  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:58:07.589628  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:58:07.609486  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:58:07.630629  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:58:07.650280  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:58:07.669308  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:58:07.687700  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:58:07.694681  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:58:07.705784  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709662  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709739  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.716649  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:58:07.726290  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:58:07.736118  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740041  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740101  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.747081  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:58:07.757480  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:58:07.767310  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771054  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771114  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.778013  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:58:07.788245  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:58:07.792058  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:58:07.792123  722351 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0916 23:58:07.792232  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:58:07.792263  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:58:07.792307  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:58:07.805180  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:58:07.805247  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
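
The manifest above is the kube-vip static pod written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below; because the ip_vs modules were not found (kube-vip.go:163 above), it only manages the virtual IP 192.168.49.254 and skips control-plane load-balancing. A minimal Go sketch of an equivalent module check, reading /proc/modules instead of shelling out to `lsmod | grep ip_vs` (illustrative only, not minikube's implementation):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasIPVS reports whether any ip_vs* module appears in /proc/modules,
// which is the same data `lsmod` prints.
func hasIPVS() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) > 0 && strings.HasPrefix(fields[0], "ip_vs") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := hasIPVS()
	fmt.Println("ip_vs loaded:", ok, "err:", err)
}
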
	I0916 23:58:07.805296  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:58:07.814610  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:58:07.814678  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:58:07.825352  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:58:07.844047  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:58:07.862757  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:58:07.883848  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:58:07.887562  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:07.899646  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:07.974384  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:08.004718  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:08.005001  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:08.005124  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:58:08.005169  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:08.024622  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:08.169785  722351 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:08.169853  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:58:25.708852  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (17.538975369s)
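
For reference, the --discovery-token-ca-cert-hash passed to kubeadm join above is the SHA-256 digest of the cluster CA certificate's public key (its DER-encoded SubjectPublicKeyInfo). A small stdlib-Go sketch that recomputes it from the CA file this run copies to /var/lib/minikube/certs/ca.crt (path reused here for illustration only):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm prints this as "sha256:<hex digest of the SubjectPublicKeyInfo>".
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
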
	I0916 23:58:25.708884  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:58:25.930343  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m03 minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:58:26.006016  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:58:26.089408  722351 start.go:319] duration metric: took 18.084403561s to joinCluster
	I0916 23:58:26.089494  722351 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:26.089805  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:26.091004  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:58:26.092246  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:26.200675  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:26.214424  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:58:26.214506  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:58:26.214713  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	W0916 23:58:28.218137  722351 node_ready.go:57] node "ha-198834-m03" has "Ready":"False" status (will retry)
	I0916 23:58:29.718579  722351 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0916 23:58:29.718621  722351 node_ready.go:38] duration metric: took 3.503891029s for node "ha-198834-m03" to be "Ready" ...
	I0916 23:58:29.718640  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:58:29.718688  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:58:29.730821  722351 api_server.go:72] duration metric: took 3.641289304s to wait for apiserver process to appear ...
	I0916 23:58:29.730847  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:58:29.730870  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:58:29.736447  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:58:29.737363  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:58:29.737382  722351 api_server.go:131] duration metric: took 6.528439ms to wait for apiserver health ...
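
The health check above is a plain HTTPS GET of /healthz that must return 200 with the body "ok". A minimal Go sketch of the same probe, reusing the client certificate, key, and CA paths from the rest.Config dump logged earlier (endpoint and paths are this run's values, shown for illustration):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	profile := "/home/jenkins/minikube-integration/21550-661878/.minikube"
	cert, err := tls.LoadX509KeyPair(profile+"/profiles/ha-198834/client.crt", profile+"/profiles/ha-198834/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(profile + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
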
	I0916 23:58:29.737390  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:58:29.743125  722351 system_pods.go:59] 27 kube-system pods found
	I0916 23:58:29.743154  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.743159  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.743162  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.743166  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.743169  722351 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.743172  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.743179  722351 system_pods.go:61] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743182  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.743189  722351 system_pods.go:61] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743193  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.743198  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.743202  722351 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.743206  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.743209  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.743212  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.743216  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.743220  722351 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743227  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.743231  722351 system_pods.go:61] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743236  722351 system_pods.go:61] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743241  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.743245  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.743248  722351 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.743251  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.743254  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.743257  722351 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.743260  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.743267  722351 system_pods.go:74] duration metric: took 5.871633ms to wait for pod list to return data ...
	I0916 23:58:29.743275  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:58:29.746038  722351 default_sa.go:45] found service account: "default"
	I0916 23:58:29.746059  722351 default_sa.go:55] duration metric: took 2.77496ms for default service account to be created ...
	I0916 23:58:29.746067  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:58:29.751428  722351 system_pods.go:86] 27 kube-system pods found
	I0916 23:58:29.751454  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.751459  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.751463  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.751466  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.751469  722351 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.751472  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.751478  722351 system_pods.go:89] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751482  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.751490  722351 system_pods.go:89] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751494  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.751498  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.751501  722351 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.751504  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.751508  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.751512  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.751515  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.751520  722351 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751526  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.751530  722351 system_pods.go:89] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751535  722351 system_pods.go:89] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751540  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.751545  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.751550  722351 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.751554  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.751558  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.751563  722351 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.751569  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.751577  722351 system_pods.go:126] duration metric: took 5.505301ms to wait for k8s-apps to be running ...
	I0916 23:58:29.751587  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:58:29.751637  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:58:29.764067  722351 system_svc.go:56] duration metric: took 12.467532ms WaitForService to wait for kubelet
	I0916 23:58:29.764102  722351 kubeadm.go:578] duration metric: took 3.674577242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:58:29.764127  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:58:29.767676  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767699  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767712  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767717  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767721  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767724  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767728  722351 node_conditions.go:105] duration metric: took 3.595861ms to run NodePressure ...
	I0916 23:58:29.767739  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:58:29.767761  722351 start.go:255] writing updated cluster config ...
	I0916 23:58:29.768076  722351 ssh_runner.go:195] Run: rm -f paused
	I0916 23:58:29.772054  722351 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:29.772528  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:58:29.776391  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781517  722351 pod_ready.go:94] pod "coredns-66bc5c9577-5wx4k" is "Ready"
	I0916 23:58:29.781544  722351 pod_ready.go:86] duration metric: took 5.128752ms for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781552  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.786524  722351 pod_ready.go:94] pod "coredns-66bc5c9577-mjbz6" is "Ready"
	I0916 23:58:29.786549  722351 pod_ready.go:86] duration metric: took 4.991527ms for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.789148  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793593  722351 pod_ready.go:94] pod "etcd-ha-198834" is "Ready"
	I0916 23:58:29.793614  722351 pod_ready.go:86] duration metric: took 4.43654ms for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793622  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797833  722351 pod_ready.go:94] pod "etcd-ha-198834-m02" is "Ready"
	I0916 23:58:29.797856  722351 pod_ready.go:86] duration metric: took 4.228462ms for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797864  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.974055  722351 request.go:683] "Waited before sending request" delay="176.0853ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.173047  722351 request.go:683] "Waited before sending request" delay="193.205885ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.373324  722351 request.go:683] "Waited before sending request" delay="74.260595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.573189  722351 request.go:683] "Waited before sending request" delay="196.187075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.973960  722351 request.go:683] "Waited before sending request" delay="171.749825ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.977519  722351 pod_ready.go:94] pod "etcd-ha-198834-m03" is "Ready"
	I0916 23:58:30.977548  722351 pod_ready.go:86] duration metric: took 1.179678858s for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.172996  722351 request.go:683] "Waited before sending request" delay="195.270589ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:58:31.176896  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.373184  722351 request.go:683] "Waited before sending request" delay="196.155083ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834"
	I0916 23:58:31.573091  722351 request.go:683] "Waited before sending request" delay="196.292532ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:31.576254  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834" is "Ready"
	I0916 23:58:31.576280  722351 pod_ready.go:86] duration metric: took 399.33205ms for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.576288  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.773718  722351 request.go:683] "Waited before sending request" delay="197.34633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m02"
	I0916 23:58:31.973716  722351 request.go:683] "Waited before sending request" delay="196.477986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:31.978504  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m02" is "Ready"
	I0916 23:58:31.978555  722351 pod_ready.go:86] duration metric: took 402.258846ms for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.978567  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.172964  722351 request.go:683] "Waited before sending request" delay="194.26238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m03"
	I0916 23:58:32.373491  722351 request.go:683] "Waited before sending request" delay="197.345263ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:32.376525  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m03" is "Ready"
	I0916 23:58:32.376552  722351 pod_ready.go:86] duration metric: took 397.9768ms for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.573017  722351 request.go:683] "Waited before sending request" delay="196.299414ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:58:32.577487  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.773969  722351 request.go:683] "Waited before sending request" delay="196.341624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834"
	I0916 23:58:32.973585  722351 request.go:683] "Waited before sending request" delay="196.346276ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:32.977689  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834" is "Ready"
	I0916 23:58:32.977721  722351 pod_ready.go:86] duration metric: took 400.206125ms for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.977735  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.173032  722351 request.go:683] "Waited before sending request" delay="195.180271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m02"
	I0916 23:58:33.373811  722351 request.go:683] "Waited before sending request" delay="197.350717ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:33.376722  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m02" is "Ready"
	I0916 23:58:33.376747  722351 pod_ready.go:86] duration metric: took 399.004052ms for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.376756  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.573048  722351 request.go:683] "Waited before sending request" delay="196.186349ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m03"
	I0916 23:58:33.773733  722351 request.go:683] "Waited before sending request" delay="197.347012ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:33.776944  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m03" is "Ready"
	I0916 23:58:33.776972  722351 pod_ready.go:86] duration metric: took 400.209131ms for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.973425  722351 request.go:683] "Waited before sending request" delay="196.344301ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:58:33.977203  722351 pod_ready.go:83] waiting for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.173688  722351 request.go:683] "Waited before sending request" delay="196.345801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tkhn"
	I0916 23:58:34.373026  722351 request.go:683] "Waited before sending request" delay="196.256084ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:34.376079  722351 pod_ready.go:94] pod "kube-proxy-5tkhn" is "Ready"
	I0916 23:58:34.376106  722351 pod_ready.go:86] duration metric: took 398.875647ms for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.376114  722351 pod_ready.go:83] waiting for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.573402  722351 request.go:683] "Waited before sending request" delay="197.174223ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:34.773022  722351 request.go:683] "Waited before sending request" delay="196.289258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:34.973958  722351 request.go:683] "Waited before sending request" delay="97.260541ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:35.173637  722351 request.go:683] "Waited before sending request" delay="196.407064ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.573487  722351 request.go:683] "Waited before sending request" delay="193.254271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.973307  722351 request.go:683] "Waited before sending request" delay="93.259111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	W0916 23:58:36.383328  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:38.882062  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:40.882520  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:42.883194  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:45.382843  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:47.882744  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:49.882993  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:51.883265  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:54.383005  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:56.882555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:59.382463  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:01.382897  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:03.883583  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:06.382581  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:08.882275  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:11.382224  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:13.382333  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:15.882727  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:18.383800  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:20.882547  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:22.883081  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:25.383627  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:27.882377  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:29.882787  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:31.884042  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:34.382932  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:36.882730  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:38.882959  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:40.883411  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:43.382771  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:45.882938  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:48.381607  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:50.382229  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:52.382889  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:54.882546  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:56.882802  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:58.882939  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:00.883550  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:03.382872  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:05.383021  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:07.384166  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:09.883064  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:11.884141  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:14.383248  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:16.883441  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:18.884438  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:21.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:23.883713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:26.383093  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:28.883552  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:31.383392  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:33.883626  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:35.883823  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:38.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:40.883430  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:43.383026  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:45.883091  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:48.382865  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:50.882713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:52.882989  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:55.383076  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:57.383555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:59.882704  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:01.883495  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:04.382406  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:06.383424  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:08.883456  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:11.382988  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:13.882379  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:15.883651  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:18.382551  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:20.382997  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:22.882943  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:24.883256  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:27.383660  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:29.882955  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:32.383364  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	I0917 00:01:34.382530  722351 pod_ready.go:94] pod "kube-proxy-d8brp" is "Ready"
	I0917 00:01:34.382562  722351 pod_ready.go:86] duration metric: took 3m0.006439942s for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.382572  722351 pod_ready.go:83] waiting for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.387645  722351 pod_ready.go:94] pod "kube-proxy-h2fxd" is "Ready"
	I0917 00:01:34.387677  722351 pod_ready.go:86] duration metric: took 5.098826ms for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.390707  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396086  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834" is "Ready"
	I0917 00:01:34.396115  722351 pod_ready.go:86] duration metric: took 5.379692ms for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396126  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400646  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m02" is "Ready"
	I0917 00:01:34.400670  722351 pod_ready.go:86] duration metric: took 4.536355ms for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400680  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.577209  722351 request.go:683] "Waited before sending request" delay="174.117357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0917 00:01:34.580767  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m03" is "Ready"
	I0917 00:01:34.580796  722351 pod_ready.go:86] duration metric: took 180.109317ms for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.580808  722351 pod_ready.go:40] duration metric: took 3m4.808720134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:01:34.629691  722351 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:01:34.631405  722351 out.go:179] * Done! kubectl is now configured to use "ha-198834" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50aecbe9f874a63c5159d55af06211bca7903e623f01f1e603f267caaf6da9a7/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.259744438Z" level=info msg="ignoring event" container=fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.275867775Z" level=info msg="ignoring event" container=64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.320870537Z" level=info msg="ignoring event" container=310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.336829292Z" level=info msg="ignoring event" container=a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687384709Z" level=info msg="ignoring event" container=11889e34950f849cf7805c6d56f1957ad9d5af727f4810f2da728671398b9f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687719889Z" level=info msg="ignoring event" container=1ccdf9f33d5601763297f230a2f6e51620db2ed183e9f4b9179f4ccef579dfac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756623723Z" level=info msg="ignoring event" container=bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756673284Z" level=info msg="ignoring event" container=870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:01:36 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:01:37 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:37Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	43ce744921507       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   2ab50e090d466       busybox-7b57f96db7-pstjp
	f4f7ea59034e3       52546a367cc9e                                                                                         5 minutes ago        Running             coredns                   2                   7b09bb6b1db0b       coredns-66bc5c9577-mjbz6
	9a9eb43950f05       52546a367cc9e                                                                                         5 minutes ago        Running             coredns                   2                   bfd4ac8a61c79       coredns-66bc5c9577-5wx4k
	1ccdf9f33d560       52546a367cc9e                                                                                         5 minutes ago        Exited              coredns                   1                   bf6d6b59f2413       coredns-66bc5c9577-mjbz6
	11889e34950f8       52546a367cc9e                                                                                         5 minutes ago        Exited              coredns                   1                   870758f308362       coredns-66bc5c9577-5wx4k
	470c5aeb0143c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              5 minutes ago        Running             kindnet-cni               0                   f541f878be896       kindnet-h28vp
	b16ddbbc469c5       6e38f40d628db                                                                                         5 minutes ago        Running             storage-provisioner       0                   50aecbe9f874a       storage-provisioner
	2da683f529549       df0860106674d                                                                                         5 minutes ago        Running             kube-proxy                0                   b04f554fbbf03       kube-proxy-5tkhn
	8a32665f7e3e4       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     6 minutes ago        Running             kube-vip                  0                   5e4aed7a38e18       kube-vip-ha-198834
	4f536df8f44eb       a0af72f2ec6d6                                                                                         6 minutes ago        Running             kube-controller-manager   0                   3f97e150fa11b       kube-controller-manager-ha-198834
	ea129c2b5408a       90550c43ad2bc                                                                                         6 minutes ago        Running             kube-apiserver            0                   364803df34eb0       kube-apiserver-ha-198834
	69601afa8d5b0       5f1f5298c888d                                                                                         6 minutes ago        Running             etcd                      0                   d6bbb58cc14ca       etcd-ha-198834
	82a99d0c7744a       46169d968e920                                                                                         6 minutes ago        Running             kube-scheduler            0                   7ffde546949d7       kube-scheduler-ha-198834
	
	
	==> coredns [11889e34950f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50107 - 45856 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000165011s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:50484 - 7509 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000096464s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [1ccdf9f33d56] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49262 - 38359 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000112146s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:51442 - 41164 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000125545s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [9a9eb43950f0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48462 - 46874 "HINFO IN 5273252588524494281.7165436024008789767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039842483s
	[INFO] 10.244.1.2:57104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003506199s
	[INFO] 10.244.1.2:35085 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.022178595s
	[INFO] 10.244.0.4:51301 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001619426s
	[INFO] 10.244.1.2:53849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259853s
	[INFO] 10.244.1.2:45188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162256s
	[INFO] 10.244.1.2:47534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012152721s
	[INFO] 10.244.1.2:52406 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144842s
	[INFO] 10.244.0.4:34463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015354953s
	[INFO] 10.244.0.4:44729 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186473s
	[INFO] 10.244.0.4:49846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119774s
	[INFO] 10.244.0.4:48015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170848s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022712s
	[INFO] 10.244.1.2:41177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176943s
	[INFO] 10.244.1.2:35431 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141918s
	[INFO] 10.244.0.4:42357 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222475s
	[INFO] 10.244.0.4:38639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076388s
	[INFO] 10.244.1.2:48245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137114s
	
	
	==> coredns [f4f7ea59034e] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55965 - 29875 "HINFO IN 6625775143588404920.46653605595863863. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.021009667s
	[INFO] 10.244.1.2:48391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347519s
	[INFO] 10.244.1.2:52968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.006268731s
	[INFO] 10.244.1.2:37064 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.01625905s
	[INFO] 10.244.0.4:38724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015051s
	[INFO] 10.244.0.4:54867 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000812822s
	[INFO] 10.244.0.4:36556 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000823234s
	[INFO] 10.244.0.4:49673 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000092222s
	[INFO] 10.244.1.2:54588 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010881123s
	[INFO] 10.244.1.2:37311 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184317s
	[INFO] 10.244.1.2:34776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214951s
	[INFO] 10.244.1.2:60592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142928s
	[INFO] 10.244.0.4:49014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014497s
	[INFO] 10.244.0.4:49266 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004264282s
	[INFO] 10.244.0.4:42048 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128777s
	[INFO] 10.244.0.4:37542 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071541s
	[INFO] 10.244.1.2:43417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165726s
	[INFO] 10.244.0.4:54211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155228s
	[INFO] 10.244.0.4:54131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161968s
	[INFO] 10.244.1.2:34766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190106s
	[INFO] 10.244.1.2:35363 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179724s
	[INFO] 10.244.1.2:39508 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110509s
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 3525bf030f0d49c1ab057441433c477c
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m59s
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m59s
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m5s
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m59s
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m58s  kube-proxy       
	  Normal  Starting                 6m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m5s   kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s   kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s   kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m     node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           5m31s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           5m     node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 35caf7934a824e33949ce426f7316bfd
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m27s
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m30s
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m24s  kube-proxy       
	  Normal  RegisteredNode  5m26s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  5m25s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  5m     node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	
	
	Name:               ha-198834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-198834-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 32c4e7dc065e4fa49595825994457b8e
	  System UUID:                6f810798-3461-44d1-91c3-d55b483ec842
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l2jn5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 etcd-ha-198834-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m54s
	  kube-system                 kindnet-67fn9                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m49s
	  kube-system                 kube-apiserver-ha-198834-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-ha-198834-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-d8brp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-ha-198834-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-vip-ha-198834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  4m56s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  4m55s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  4m55s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
	
	
	==> etcd [69601afa8d5b] <==
	{"level":"info","ts":"2025-09-16T23:58:12.665306Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.670540Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-16T23:58:12.671162Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.670991Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":1384448,"size":"1.4 MB"}
	{"level":"info","ts":"2025-09-16T23:58:12.677546Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":686,"remote-peer-id":"b3d041dbb5a11c89","bytes":1393601,"size":"1.4 MB"}
	{"level":"warn","ts":"2025-09-16T23:58:12.688158Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:58:12.688674Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:58:12.699050Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-16T23:58:12.699094Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.699108Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.702028Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-16T23:58:12.702080Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.702094Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.733438Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.736369Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-16T23:58:12.759123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:34222","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:58:12.760774Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(5981864578030751937 12593026477526642892 12956928539845794953)"}
	{"level":"info","ts":"2025-09-16T23:58:12.760967Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.761007Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:19.991223Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:25.496900Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:30.072550Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:32.068856Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:40.123997Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:42.678047Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","bytes":1393601,"size":"1.4 MB","took":"30.013494343s"}
	
	
	==> kernel <==
	 00:03:24 up  2:45,  0 users,  load average: 2.29, 1.39, 1.12
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [470c5aeb0143] <==
	I0917 00:02:40.420211       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:02:50.424337       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:02:50.424386       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:02:50.424593       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:02:50.424610       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:02:50.424745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:50.424758       1 main.go:301] handling current node
	I0917 00:03:00.418533       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:00.418581       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:00.418801       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:00.418814       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:00.418930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:00.418942       1 main.go:301] handling current node
	I0917 00:03:10.423193       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:10.423225       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:10.423436       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:10.423448       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:10.423551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:10.423559       1 main.go:301] handling current node
	I0917 00:03:20.423023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:20.423063       1 main.go:301] handling current node
	I0917 00:03:20.423080       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:20.423085       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:20.423378       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:20.423393       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [ea129c2b5408] <==
	I0916 23:57:19.032951       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0916 23:57:23.344022       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0916 23:57:24.194840       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:57:24.200277       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:57:24.242655       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0916 23:58:29.048843       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:34.361323       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:36.632983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:02.667929       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:58.976838       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:19.218755       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:15.644338       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:43.338268       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:03:18.851078       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58262: use of closed network connection
	E0917 00:03:19.024113       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58282: use of closed network connection
	E0917 00:03:19.194951       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58306: use of closed network connection
	E0917 00:03:19.388722       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58332: use of closed network connection
	E0917 00:03:19.557698       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58342: use of closed network connection
	E0917 00:03:19.744687       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58348: use of closed network connection
	E0917 00:03:19.919836       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58362: use of closed network connection
	E0917 00:03:20.087518       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58376: use of closed network connection
	E0917 00:03:20.254024       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58398: use of closed network connection
	E0917 00:03:22.459781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48968: use of closed network connection
	E0917 00:03:22.632160       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48992: use of closed network connection
	E0917 00:03:22.799975       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:49024: use of closed network connection
	
	
	==> kube-controller-manager [4f536df8f44e] <==
	I0916 23:57:23.340737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0916 23:57:23.340876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:57:23.341125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:57:23.341625       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0916 23:57:23.341694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0916 23:57:23.342559       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0916 23:57:23.344828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:57:23.344975       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:57:23.345054       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:57:23.345095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:57:23.345107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:57:23.345114       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:57:23.346125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:23.351186       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834" podCIDRs=["10.244.0.0/24"]
	I0916 23:57:23.356557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:23.360087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:53.917484       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m02\" does not exist"
	I0916 23:57:53.927329       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:58.295579       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	E0916 23:58:24.690329       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 23:58:24.703047       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:58:25.387067       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m03\" does not exist"
	I0916 23:58:25.397154       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m03" podCIDRs=["10.244.2.0/24"]
	I0916 23:58:28.308323       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	E0917 00:01:35.727697       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [2da683f52954] <==
	I0916 23:57:24.932824       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:57:25.001436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:57:25.102414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:57:25.102449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:57:25.102563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:57:25.131540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:57:25.131604       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:57:25.138482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:57:25.139006       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:57:25.139079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:57:25.143232       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:57:25.143254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:57:25.143282       1 config.go:200] "Starting service config controller"
	I0916 23:57:25.143288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:57:25.143298       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:57:25.143304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:57:25.144514       1 config.go:309] "Starting node config controller"
	I0916 23:57:25.144540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:57:25.144548       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:57:25.243772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [82a99d0c7744] <==
	E0916 23:58:30.036759       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.036813       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5897933c-61bc-4eef-8922-66c37ba68c57(kube-system/kindnet-rwc59) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	E0916 23:58:30.036834       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	I0916 23:58:30.038109       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.048424       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:30.048665       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4edbf3a1-360c-4f5c-81a3-aa63deb9a159(kube-system/kindnet-lpn5v) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	E0916 23:58:30.048751       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	I0916 23:58:30.051563       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:32.033373       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:32.033442       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b547a423-84fb-45ae-be85-ebd5ae31cede(kube-system/kindnet-wklkh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	E0916 23:58:32.033468       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	I0916 23:58:32.034562       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:34.059741       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	E0916 23:58:34.059840       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ceef8152-3e11-4bf0-99dc-43470c027544(kube-system/kindnet-cdptd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.059869       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.060293       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	E0916 23:58:34.060658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c8813f5f-dfaf-4be1-a0ba-e444bcb2e943(kube-system/kindnet-8t8pb) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	E0916 23:58:34.061375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	I0916 23:58:34.061557       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	I0916 23:58:34.062640       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	I0917 00:01:35.538693       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="ecad6988-1efb-4cc6-8920-902b41d3f3ed" pod="default/busybox-7b57f96db7-kg4q6" assumedNode="ha-198834-m02" currentNode="ha-198834-m03"
	E0917 00:01:35.544474       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m03"
	E0917 00:01:35.546366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ecad6988-1efb-4cc6-8920-902b41d3f3ed(default/busybox-7b57f96db7-kg4q6) was assumed on ha-198834-m03 but assigned to ha-198834-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	E0917 00:01:35.546583       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	I0917 00:01:35.548055       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m02"
	
	
	==> kubelet <==
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349086    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51d39f-7e43-461b-a021-13ddf0cb9845-lib-modules\") pod \"kindnet-h28vp\" (UID: \"6c51d39f-7e43-461b-a021-13ddf0cb9845\") " pod="kube-system/kindnet-h28vp"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349103    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-xtables-lock\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349123    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84n49\" (UniqueName: \"kubernetes.io/projected/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-kube-api-access-84n49\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650251    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-config-volume\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650425    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5ns\" (UniqueName: \"kubernetes.io/projected/c918625f-be11-44bf-8b82-d4c21b8993d1-kube-api-access-th5ns\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650660    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c918625f-be11-44bf-8b82-d4c21b8993d1-config-volume\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650701    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmb4\" (UniqueName: \"kubernetes.io/projected/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-kube-api-access-xhmb4\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.014693    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5tkhn" podStartSLOduration=1.014665687 podStartE2EDuration="1.014665687s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:24.932304069 +0000 UTC m=+6.176281069" watchObservedRunningTime="2025-09-16 23:57:25.014665687 +0000 UTC m=+6.258642688"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.042478    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.046332    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f541f878be89694936d8219d8e7fc682a8a169d9edf6417f067927aa4748c0ae"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153403    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrvp\" (UniqueName: \"kubernetes.io/projected/6b6f64f3-2647-4e13-be41-47fcc6111f3e-kube-api-access-jqrvp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153458    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b6f64f3-2647-4e13-be41-47fcc6111f3e-tmp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098005    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5wx4k" podStartSLOduration=2.097979793 podStartE2EDuration="2.097979793s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.086842117 +0000 UTC m=+7.330819118" watchObservedRunningTime="2025-09-16 23:57:26.097979793 +0000 UTC m=+7.341956793"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098130    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.098124108 podStartE2EDuration="1.098124108s" podCreationTimestamp="2025-09-16 23:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.097817254 +0000 UTC m=+7.341794256" watchObservedRunningTime="2025-09-16 23:57:26.098124108 +0000 UTC m=+7.342101108"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.159968    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mjbz6" podStartSLOduration=5.159946005 podStartE2EDuration="5.159946005s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.124330373 +0000 UTC m=+7.368307374" watchObservedRunningTime="2025-09-16 23:57:29.159946005 +0000 UTC m=+10.403923006"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.193262    2468 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.194144    2468 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 23:57:30 ha-198834 kubelet[2468]: I0916 23:57:30.158085    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h28vp" podStartSLOduration=1.342825895 podStartE2EDuration="6.158061718s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="2025-09-16 23:57:24.955662014 +0000 UTC m=+6.199639012" lastFinishedPulling="2025-09-16 23:57:29.770897851 +0000 UTC m=+11.014874835" observedRunningTime="2025-09-16 23:57:30.157595407 +0000 UTC m=+11.401572408" watchObservedRunningTime="2025-09-16 23:57:30.158061718 +0000 UTC m=+11.402038720"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.230434    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.258365    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370599    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370662    2468 scope.go:117] "RemoveContainer" containerID="fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.388953    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.389033    2468 scope.go:117] "RemoveContainer" containerID="64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea"
	Sep 17 00:01:35 ha-198834 kubelet[2468]: I0917 00:01:35.703764    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5r6\" (UniqueName: \"kubernetes.io/projected/a7cf1231-2a12-4247-a01a-2c2f02f5f2d8-kube-api-access-vt5r6\") pod \"busybox-7b57f96db7-pstjp\" (UID: \"a7cf1231-2a12-4247-a01a-2c2f02f5f2d8\") " pod="default/busybox-7b57f96db7-pstjp"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (2.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (29.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 node add --alsologtostderr -v 5: exit status 80 (27.187506535s)

                                                
                                                
-- stdout --
	* Adding node m04 to cluster ha-198834 as [worker]
	* Starting "ha-198834-m04" worker node in "ha-198834" cluster
	* Pulling base image v0.0.48 ...
	* Stopping node "ha-198834-m04"  ...
	* Deleting "ha-198834-m04" in docker ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:03:24.771762  741759 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:03:24.772056  741759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:03:24.772067  741759 out.go:374] Setting ErrFile to fd 2...
	I0917 00:03:24.772071  741759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:03:24.772272  741759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:03:24.772605  741759 mustload.go:65] Loading cluster: ha-198834
	I0917 00:03:24.773069  741759 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:03:24.773497  741759 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:03:24.791583  741759 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:03:24.791854  741759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:03:24.847225  741759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:03:24.836982821 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:03:24.847706  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:03:24.867417  741759 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:03:24.868009  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:03:24.885507  741759 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:03:24.885809  741759 api_server.go:166] Checking apiserver status ...
	I0917 00:03:24.885865  741759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:03:24.885918  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:03:24.902854  741759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:03:25.004147  741759 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:03:25.014587  741759 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:03:25.014640  741759 ssh_runner.go:195] Run: ls
	I0917 00:03:25.018442  741759 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:03:25.022668  741759 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:03:25.024519  741759 out.go:179] * Adding node m04 to cluster ha-198834 as [worker]
	I0917 00:03:25.026089  741759 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:03:25.026204  741759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:03:25.027763  741759 out.go:179] * Starting "ha-198834-m04" worker node in "ha-198834" cluster
	I0917 00:03:25.029093  741759 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:03:25.030413  741759 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:03:25.031515  741759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:03:25.031556  741759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:03:25.031576  741759 cache.go:58] Caching tarball of preloaded images
	I0917 00:03:25.031594  741759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:03:25.031667  741759 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:03:25.031680  741759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:03:25.031793  741759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:03:25.051455  741759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:03:25.051476  741759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:03:25.051493  741759 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:03:25.051518  741759 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:03:25.051618  741759 start.go:364] duration metric: took 81.564µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:03:25.051651  741759 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0917 00:03:25.051754  741759 start.go:125] createHost starting for "m04" (driver="docker")
	I0917 00:03:25.053970  741759 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:03:25.054085  741759 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0917 00:03:25.054115  741759 client.go:168] LocalClient.Create starting
	I0917 00:03:25.054212  741759 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0917 00:03:25.054243  741759 main.go:141] libmachine: Decoding PEM data...
	I0917 00:03:25.054253  741759 main.go:141] libmachine: Parsing certificate...
	I0917 00:03:25.054318  741759 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0917 00:03:25.054338  741759 main.go:141] libmachine: Decoding PEM data...
	I0917 00:03:25.054346  741759 main.go:141] libmachine: Parsing certificate...
	I0917 00:03:25.054563  741759 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:03:25.070460  741759 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc0006373e0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:03:25.070516  741759 kic.go:121] calculated static IP "192.168.49.5" for the "ha-198834-m04" container
	I0917 00:03:25.070590  741759 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:03:25.088230  741759 cli_runner.go:164] Run: docker volume create ha-198834-m04 --label name.minikube.sigs.k8s.io=ha-198834-m04 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:03:25.107718  741759 oci.go:103] Successfully created a docker volume ha-198834-m04
	I0917 00:03:25.107815  741759 cli_runner.go:164] Run: docker run --rm --name ha-198834-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m04 --entrypoint /usr/bin/test -v ha-198834-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:03:25.518764  741759 oci.go:107] Successfully prepared a docker volume ha-198834-m04
	I0917 00:03:25.518802  741759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:03:25.518827  741759 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:03:25.518934  741759 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:03:29.332091  741759 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.813093898s)
	I0917 00:03:29.332146  741759 kic.go:203] duration metric: took 3.813313251s to extract preloaded images to volume ...
	W0917 00:03:29.332253  741759 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:03:29.332290  741759 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:03:29.332338  741759 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:03:29.384064  741759 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m04 --name ha-198834-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m04 --network ha-198834 --ip 192.168.49.5 --volume ha-198834-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:03:29.652273  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Running}}
	I0917 00:03:29.672431  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:29.691020  741759 cli_runner.go:164] Run: docker exec ha-198834-m04 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:03:29.741180  741759 oci.go:144] the created container "ha-198834-m04" has a running status.
	I0917 00:03:29.741211  741759 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa...
	I0917 00:03:29.806127  741759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:03:29.806213  741759 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:03:30.136047  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:30.154506  741759 cli_runner.go:164] Run: docker inspect ha-198834-m04
	I0917 00:03:30.172103  741759 errors.go:84] Postmortem inspect ("docker inspect ha-198834-m04"): -- stdout --
	[
	    {
	        "Id": "3f64cb2520497e4a6fc08060939ca833764f5dfaeac8ac951c42136e317f1682",
	        "Created": "2025-09-17T00:03:29.399808225Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:03:29.433406499Z",
	            "FinishedAt": "2025-09-17T00:03:29.814665456Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3f64cb2520497e4a6fc08060939ca833764f5dfaeac8ac951c42136e317f1682/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f64cb2520497e4a6fc08060939ca833764f5dfaeac8ac951c42136e317f1682/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f64cb2520497e4a6fc08060939ca833764f5dfaeac8ac951c42136e317f1682/hosts",
	        "LogPath": "/var/lib/docker/containers/3f64cb2520497e4a6fc08060939ca833764f5dfaeac8ac951c42136e317f1682/3f64cb2520497e4a6fc08060939ca833764f5dfaeac8ac951c42136e317f1682-json.log",
	        "Name": "/ha-198834-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f64cb2520497e4a6fc08060939ca833764f5dfaeac8ac951c42136e317f1682",
	                "LowerDir": "/var/lib/docker/overlay2/c6cb9e398a7a6cd23855e4e6324428637c07573eb7e7d74821bf378b86887a44-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6cb9e398a7a6cd23855e4e6324428637c07573eb7e7d74821bf378b86887a44/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6cb9e398a7a6cd23855e4e6324428637c07573eb7e7d74821bf378b86887a44/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6cb9e398a7a6cd23855e4e6324428637c07573eb7e7d74821bf378b86887a44/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-198834-m04",
	                "Source": "/var/lib/docker/volumes/ha-198834-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834-m04",
	                "name.minikube.sigs.k8s.io": "ha-198834-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834-m04",
	                        "3f64cb252049"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0917 00:03:30.172181  741759 cli_runner.go:164] Run: docker logs --timestamps --details ha-198834-m04
	I0917 00:03:30.190700  741759 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-198834-m04"): -- stdout --
	2025-09-17T00:03:29.645716843Z  + userns=
	2025-09-17T00:03:29.645751180Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-17T00:03:29.649017877Z  + validate_userns
	2025-09-17T00:03:29.649040201Z  + [[ -z '' ]]
	2025-09-17T00:03:29.649042707Z  + return
	2025-09-17T00:03:29.649044616Z  + configure_containerd
	2025-09-17T00:03:29.649046469Z  + local snapshotter=
	2025-09-17T00:03:29.649048123Z  + [[ -n '' ]]
	2025-09-17T00:03:29.649049856Z  + [[ -z '' ]]
	2025-09-17T00:03:29.649434648Z  ++ stat -f -c %T /kind
	2025-09-17T00:03:29.651100705Z  + container_filesystem=overlayfs
	2025-09-17T00:03:29.651121642Z  + [[ overlayfs == \z\f\s ]]
	2025-09-17T00:03:29.651125561Z  + [[ -n '' ]]
	2025-09-17T00:03:29.651158513Z  + configure_proxy
	2025-09-17T00:03:29.651171558Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-17T00:03:29.657499736Z  + [[ ! -z '' ]]
	2025-09-17T00:03:29.657524302Z  + cat
	2025-09-17T00:03:29.659208318Z  + fix_mount
	2025-09-17T00:03:29.659225877Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-17T00:03:29.659228930Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-17T00:03:29.659494300Z  ++ which mount
	2025-09-17T00:03:29.661469346Z  ++ which umount
	2025-09-17T00:03:29.662560106Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-17T00:03:29.668342138Z  ++ which mount
	2025-09-17T00:03:29.670048010Z  ++ which umount
	2025-09-17T00:03:29.671295309Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-17T00:03:29.673054658Z  +++ which mount
	2025-09-17T00:03:29.674197620Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-17T00:03:29.675519453Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-17T00:03:29.675534451Z  + echo 'INFO: remounting /sys read-only'
	2025-09-17T00:03:29.675538258Z  INFO: remounting /sys read-only
	2025-09-17T00:03:29.675541170Z  + mount -o remount,ro /sys
	2025-09-17T00:03:29.677456774Z  + echo 'INFO: making mounts shared'
	2025-09-17T00:03:29.677487690Z  INFO: making mounts shared
	2025-09-17T00:03:29.677491727Z  + mount --make-rshared /
	2025-09-17T00:03:29.678771716Z  + retryable_fix_cgroup
	2025-09-17T00:03:29.679134333Z  ++ seq 0 10
	2025-09-17T00:03:29.679860182Z  + for i in $(seq 0 10)
	2025-09-17T00:03:29.679870656Z  + fix_cgroup
	2025-09-17T00:03:29.679955196Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-17T00:03:29.679965473Z  + echo 'INFO: detected cgroup v2'
	2025-09-17T00:03:29.679968883Z  INFO: detected cgroup v2
	2025-09-17T00:03:29.679986423Z  + return
	2025-09-17T00:03:29.679989406Z  + return
	2025-09-17T00:03:29.680051022Z  + fix_machine_id
	2025-09-17T00:03:29.680060930Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-17T00:03:29.680064325Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-17T00:03:29.680067192Z  + rm -f /etc/machine-id
	2025-09-17T00:03:29.681072844Z  + systemd-machine-id-setup
	2025-09-17T00:03:29.685182646Z  Initializing machine ID from random generator.
	2025-09-17T00:03:29.687584744Z  + fix_product_name
	2025-09-17T00:03:29.687603098Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-17T00:03:29.687663445Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-17T00:03:29.687676292Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-17T00:03:29.687679678Z  + echo kind
	2025-09-17T00:03:29.689121158Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-17T00:03:29.691245280Z  + fix_product_uuid
	2025-09-17T00:03:29.691261796Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-17T00:03:29.691265185Z  + cat /proc/sys/kernel/random/uuid
	2025-09-17T00:03:29.692367153Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-17T00:03:29.692382107Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-17T00:03:29.692384667Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-17T00:03:29.692386622Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-17T00:03:29.693866525Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-17T00:03:29.693880207Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-17T00:03:29.693882790Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-17T00:03:29.693884789Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-17T00:03:29.695413813Z  + select_iptables
	2025-09-17T00:03:29.695426678Z  + local mode num_legacy_lines num_nft_lines
	2025-09-17T00:03:29.696380331Z  ++ grep -c '^-'
	2025-09-17T00:03:29.699273606Z  ++ true
	2025-09-17T00:03:29.699405736Z  + num_legacy_lines=0
	2025-09-17T00:03:29.700372435Z  ++ grep -c '^-'
	2025-09-17T00:03:29.706523524Z  + num_nft_lines=6
	2025-09-17T00:03:29.706546763Z  + '[' 0 -ge 6 ']'
	2025-09-17T00:03:29.706550241Z  + mode=nft
	2025-09-17T00:03:29.706552907Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-17T00:03:29.706555544Z  INFO: setting iptables to detected mode: nft
	2025-09-17T00:03:29.706558144Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:03:29.706602213Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:03:29.706615304Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:03:29.707140251Z  ++ seq 0 15
	2025-09-17T00:03:29.708168836Z  + for i in $(seq 0 15)
	2025-09-17T00:03:29.708185943Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:03:29.711473790Z  + return
	2025-09-17T00:03:29.711517069Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:03:29.711524088Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:03:29.711567535Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:03:29.712059706Z  ++ seq 0 15
	2025-09-17T00:03:29.712814149Z  + for i in $(seq 0 15)
	2025-09-17T00:03:29.712822585Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:03:29.718468503Z  + return
	2025-09-17T00:03:29.718563492Z  + enable_network_magic
	2025-09-17T00:03:29.718576835Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-17T00:03:29.718580541Z  + local docker_host_ip
	2025-09-17T00:03:29.719894102Z  ++ cut '-d ' -f1
	2025-09-17T00:03:29.720060642Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:03:29.720155514Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-17T00:03:29.770591528Z  + docker_host_ip=
	2025-09-17T00:03:29.770623176Z  + [[ -z '' ]]
	2025-09-17T00:03:29.771303961Z  ++ ip -4 route show default
	2025-09-17T00:03:29.771438972Z  ++ cut '-d ' -f3
	2025-09-17T00:03:29.773689443Z  + docker_host_ip=192.168.49.1
	2025-09-17T00:03:29.773880603Z  + iptables-save
	2025-09-17T00:03:29.774317015Z  + iptables-restore
	2025-09-17T00:03:29.776824125Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-17T00:03:29.784510418Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-17T00:03:29.786778759Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-17T00:03:29.788371109Z  + replaced='# Generated by Docker Engine.
	2025-09-17T00:03:29.788391486Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:03:29.788394970Z  # has been modified.
	2025-09-17T00:03:29.788397823Z  
	2025-09-17T00:03:29.788400596Z  nameserver 192.168.49.1
	2025-09-17T00:03:29.788403448Z  search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:03:29.788406584Z  options edns0 trust-ad ndots:0
	2025-09-17T00:03:29.788421903Z  
	2025-09-17T00:03:29.788424657Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:03:29.788427607Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:03:29.788430285Z  # Overrides: []
	2025-09-17T00:03:29.788432934Z  # Option ndots from: internal'
	2025-09-17T00:03:29.788435683Z  + [[ '' == '' ]]
	2025-09-17T00:03:29.788438347Z  + echo '# Generated by Docker Engine.
	2025-09-17T00:03:29.788441208Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:03:29.788444061Z  # has been modified.
	2025-09-17T00:03:29.788447029Z  
	2025-09-17T00:03:29.788449545Z  nameserver 192.168.49.1
	2025-09-17T00:03:29.788452191Z  search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:03:29.788455017Z  options edns0 trust-ad ndots:0
	2025-09-17T00:03:29.788457778Z  
	2025-09-17T00:03:29.788460368Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:03:29.788463320Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:03:29.788466118Z  # Overrides: []
	2025-09-17T00:03:29.788468758Z  # Option ndots from: internal'
	2025-09-17T00:03:29.788647625Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-17T00:03:29.788660978Z  + local files_to_update
	2025-09-17T00:03:29.788664278Z  + local should_fix_certificate=false
	2025-09-17T00:03:29.790120622Z  ++ cut '-d ' -f1
	2025-09-17T00:03:29.790143651Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:03:29.792314670Z  ++++ hostname
	2025-09-17T00:03:29.793121034Z  +++ timeout 5 getent ahostsv4 ha-198834-m04
	2025-09-17T00:03:29.796688007Z  + curr_ipv4=192.168.49.5
	2025-09-17T00:03:29.797196317Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-17T00:03:29.797222177Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-17T00:03:29.797225515Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-17T00:03:29.797228155Z  + [[ -n 192.168.49.5 ]]
	2025-09-17T00:03:29.797230451Z  + echo -n 192.168.49.5
	2025-09-17T00:03:29.798315373Z  ++ cut '-d ' -f1
	2025-09-17T00:03:29.798358925Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:03:29.798977760Z  ++++ hostname
	2025-09-17T00:03:29.799861113Z  +++ timeout 5 getent ahostsv6 ha-198834-m04
	2025-09-17T00:03:29.802987736Z  + curr_ipv6=
	2025-09-17T00:03:29.803001940Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-17T00:03:29.803016646Z  INFO: Detected IPv6 address: 
	2025-09-17T00:03:29.803019685Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-17T00:03:29.803025768Z  + [[ -n '' ]]
	2025-09-17T00:03:29.803028698Z  + false
	2025-09-17T00:03:29.803645178Z  ++ uname -a
	2025-09-17T00:03:29.804574081Z  + echo 'entrypoint completed: Linux ha-198834-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-17T00:03:29.804586314Z  entrypoint completed: Linux ha-198834-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-17T00:03:29.804589777Z  + exec /sbin/init
	2025-09-17T00:03:29.811243047Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-17T00:03:29.811270229Z  Detected virtualization docker.
	2025-09-17T00:03:29.811274052Z  Detected architecture x86-64.
	2025-09-17T00:03:29.811366147Z  
	2025-09-17T00:03:29.811380421Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-17T00:03:29.811384445Z  
	2025-09-17T00:03:29.811792773Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:03:29.811802880Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:03:29.811839713Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:03:29.811848098Z  Exiting PID 1...
	
	-- /stdout --
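	The postmortem log above ends with systemd (PID 1) inside the kicbase container exiting because it could not create a control-group inotify object ("Too many open files"). When several node containers run on one host, this usually points at the kernel's per-user inotify limits being exhausted rather than an ordinary file-descriptor limit. A minimal diagnostic sketch, assuming the host's inotify sysctls are the bottleneck (the raised values below are illustrative, not taken from this report):

	  # Inspect the host's current inotify limits (standard Linux sysctls).
	  sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches

	  # Illustrative remediation: raise the limits for the running kernel (requires root).
	  # The exact values are an assumption; choose numbers appropriate for the host.
	  sudo sysctl -w fs.inotify.max_user_instances=512
	  sudo sysctl -w fs.inotify.max_user_watches=524288

	The retried container later in this log also exits immediately with ExitCode 255, which is consistent with a host-level limit rather than a transient error.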
	I0917 00:03:30.190801  741759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:03:30.245963  741759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:03:30.236357936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:03:30.246077  741759 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:03:30.236357936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Ar
chitecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false P
lugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:03:30.246149  741759 network_create.go:284] running [docker network inspect ha-198834-m04] to gather additional debugging logs...
	I0917 00:03:30.246165  741759 cli_runner.go:164] Run: docker network inspect ha-198834-m04
	W0917 00:03:30.261975  741759 cli_runner.go:211] docker network inspect ha-198834-m04 returned with exit code 1
	I0917 00:03:30.262031  741759 network_create.go:287] error running [docker network inspect ha-198834-m04]: docker network inspect ha-198834-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-198834-m04 not found
	I0917 00:03:30.262055  741759 network_create.go:289] output of [docker network inspect ha-198834-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-198834-m04 not found
	
	** /stderr **
	I0917 00:03:30.262129  741759 client.go:171] duration metric: took 5.208002756s to LocalClient.Create
	I0917 00:03:32.263387  741759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:03:32.263435  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:32.280683  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:32.280831  741759 retry.go:31] will retry after 288.321785ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:32.569329  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:32.587133  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:32.587260  741759 retry.go:31] will retry after 320.607696ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:32.908837  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:32.926423  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:32.926556  741759 retry.go:31] will retry after 616.218376ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:33.543057  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:33.560791  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:33.560964  741759 retry.go:31] will retry after 572.278906ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:34.133724  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:34.153078  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:03:34.153230  741759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:03:34.153255  741759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:34.153320  741759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:03:34.153368  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:34.173303  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:34.173556  741759 retry.go:31] will retry after 143.025025ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:34.317036  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:34.334416  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:34.334541  741759 retry.go:31] will retry after 409.500649ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:34.745125  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:34.763114  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:34.763256  741759 retry.go:31] will retry after 463.72036ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:35.227978  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:35.245585  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:03:35.245696  741759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:03:35.245710  741759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:35.245720  741759 start.go:128] duration metric: took 10.193959866s to createHost
	I0917 00:03:35.245731  741759 start.go:83] releasing machines lock for "ha-198834-m04", held for 10.194102412s
	W0917 00:03:35.245749  741759 start.go:714] error starting host: creating host: create: creating: prepare kic ssh: container name "ha-198834-m04" state Stopped: log: 2025-09-17T00:03:29.811792773Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:03:29.811802880Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:03:29.811839713Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:03:29.811848098Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:03:35.246218  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:35.264027  741759 stop.go:39] StopHost: ha-198834-m04
	W0917 00:03:35.264351  741759 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0917 00:03:35.266368  741759 out.go:179] * Stopping node "ha-198834-m04"  ...
	I0917 00:03:35.267504  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:35.285196  741759 stop.go:87] host is in state Stopped
	I0917 00:03:35.285293  741759 main.go:141] libmachine: Stopping "ha-198834-m04"...
	I0917 00:03:35.285369  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:35.303420  741759 stop.go:66] stop err: Machine "ha-198834-m04" is already stopped.
	I0917 00:03:35.303453  741759 stop.go:69] host is already stopped
	W0917 00:03:36.303624  741759 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0917 00:03:36.305398  741759 out.go:179] * Deleting "ha-198834-m04" in docker ...
	I0917 00:03:36.306651  741759 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-198834-m04
	I0917 00:03:36.323582  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:36.341248  741759 cli_runner.go:164] Run: docker exec --privileged -t ha-198834-m04 /bin/bash -c "sudo init 0"
	W0917 00:03:36.358868  741759 cli_runner.go:211] docker exec --privileged -t ha-198834-m04 /bin/bash -c "sudo init 0" returned with exit code 1
	I0917 00:03:36.358939  741759 oci.go:659] error shutdown ha-198834-m04: docker exec --privileged -t ha-198834-m04 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3f64cb2520497e4a6fc08060939ca833764f5dfaeac8ac951c42136e317f1682 is not running
	I0917 00:03:37.360140  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:37.377810  741759 oci.go:667] container ha-198834-m04 status is Stopped
	I0917 00:03:37.377840  741759 oci.go:679] Successfully shutdown container ha-198834-m04
	I0917 00:03:37.377899  741759 cli_runner.go:164] Run: docker rm -f -v ha-198834-m04
	I0917 00:03:37.401225  741759 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-198834-m04
	W0917 00:03:37.418979  741759 cli_runner.go:211] docker container inspect -f {{.Id}} ha-198834-m04 returned with exit code 1
	I0917 00:03:37.419073  741759 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:03:37.435741  741759 cli_runner.go:164] Run: docker network rm ha-198834
	W0917 00:03:37.452043  741759 cli_runner.go:211] docker network rm ha-198834 returned with exit code 1
	W0917 00:03:37.452178  741759 kic.go:390] failed to remove network (which might be okay) ha-198834: unable to delete a network that is attached to a running container
	W0917 00:03:37.452387  741759 out.go:285] ! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-198834-m04" state Stopped: log: 2025-09-17T00:03:29.811792773Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:03:29.811802880Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:03:29.811839713Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:03:29.811848098Z  Exiting PID 1...: container exited unexpectedly
	! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-198834-m04" state Stopped: log: 2025-09-17T00:03:29.811792773Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:03:29.811802880Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:03:29.811839713Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:03:29.811848098Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:03:37.452407  741759 start.go:729] Will try again in 5 seconds ...
	I0917 00:03:42.454072  741759 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:03:42.454193  741759 start.go:364] duration metric: took 61.853µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:03:42.454224  741759 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0917 00:03:42.454352  741759 start.go:125] createHost starting for "m04" (driver="docker")
	I0917 00:03:42.456303  741759 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:03:42.456439  741759 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0917 00:03:42.456472  741759 client.go:168] LocalClient.Create starting
	I0917 00:03:42.456541  741759 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0917 00:03:42.456577  741759 main.go:141] libmachine: Decoding PEM data...
	I0917 00:03:42.456591  741759 main.go:141] libmachine: Parsing certificate...
	I0917 00:03:42.456662  741759 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0917 00:03:42.456681  741759 main.go:141] libmachine: Decoding PEM data...
	I0917 00:03:42.456689  741759 main.go:141] libmachine: Parsing certificate...
	I0917 00:03:42.456892  741759 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:03:42.474478  741759 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc0018e5050 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:03:42.474514  741759 kic.go:121] calculated static IP "192.168.49.5" for the "ha-198834-m04" container
	I0917 00:03:42.474580  741759 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:03:42.492119  741759 cli_runner.go:164] Run: docker volume create ha-198834-m04 --label name.minikube.sigs.k8s.io=ha-198834-m04 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:03:42.509122  741759 oci.go:103] Successfully created a docker volume ha-198834-m04
	I0917 00:03:42.509222  741759 cli_runner.go:164] Run: docker run --rm --name ha-198834-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m04 --entrypoint /usr/bin/test -v ha-198834-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:03:42.780362  741759 oci.go:107] Successfully prepared a docker volume ha-198834-m04
	I0917 00:03:42.780407  741759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:03:42.780431  741759 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:03:42.780505  741759 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:03:46.060813  741759 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.280260419s)
	I0917 00:03:46.060855  741759 kic.go:203] duration metric: took 3.280418759s to extract preloaded images to volume ...
	W0917 00:03:46.061008  741759 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:03:46.061054  741759 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:03:46.061093  741759 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:03:46.116010  741759 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m04 --name ha-198834-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m04 --network ha-198834 --ip 192.168.49.5 --volume ha-198834-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:03:46.400473  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Running}}
	I0917 00:03:46.420797  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:46.439672  741759 cli_runner.go:164] Run: docker exec ha-198834-m04 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:03:46.487301  741759 oci.go:144] the created container "ha-198834-m04" has a running status.
	I0917 00:03:46.487334  741759 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa...
	I0917 00:03:46.646662  741759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:03:46.646715  741759 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:03:46.933540  741759 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:46.952663  741759 cli_runner.go:164] Run: docker inspect ha-198834-m04
	I0917 00:03:46.970097  741759 errors.go:84] Postmortem inspect ("docker inspect ha-198834-m04"): -- stdout --
	[
	    {
	        "Id": "5c635dd84a3e184a1a21e475d2287fa26b59e5e2aca547462f500e4fd8fdb239",
	        "Created": "2025-09-17T00:03:46.13160687Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:03:46.16554191Z",
	            "FinishedAt": "2025-09-17T00:03:46.597961116Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/5c635dd84a3e184a1a21e475d2287fa26b59e5e2aca547462f500e4fd8fdb239/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c635dd84a3e184a1a21e475d2287fa26b59e5e2aca547462f500e4fd8fdb239/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c635dd84a3e184a1a21e475d2287fa26b59e5e2aca547462f500e4fd8fdb239/hosts",
	        "LogPath": "/var/lib/docker/containers/5c635dd84a3e184a1a21e475d2287fa26b59e5e2aca547462f500e4fd8fdb239/5c635dd84a3e184a1a21e475d2287fa26b59e5e2aca547462f500e4fd8fdb239-json.log",
	        "Name": "/ha-198834-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5c635dd84a3e184a1a21e475d2287fa26b59e5e2aca547462f500e4fd8fdb239",
	                "LowerDir": "/var/lib/docker/overlay2/1f95f6c7964dccda37ba0eec7d1d41d99d9e42f5e1031f70676dfec609ee284f-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f95f6c7964dccda37ba0eec7d1d41d99d9e42f5e1031f70676dfec609ee284f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f95f6c7964dccda37ba0eec7d1d41d99d9e42f5e1031f70676dfec609ee284f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f95f6c7964dccda37ba0eec7d1d41d99d9e42f5e1031f70676dfec609ee284f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-198834-m04",
	                "Source": "/var/lib/docker/volumes/ha-198834-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834-m04",
	                "name.minikube.sigs.k8s.io": "ha-198834-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834-m04",
	                        "5c635dd84a3e"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0917 00:03:46.970180  741759 cli_runner.go:164] Run: docker logs --timestamps --details ha-198834-m04
	I0917 00:03:46.990439  741759 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-198834-m04"): -- stdout --
	2025-09-17T00:03:46.393457137Z  + userns=
	2025-09-17T00:03:46.393504550Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-17T00:03:46.395874832Z  + validate_userns
	2025-09-17T00:03:46.395889863Z  + [[ -z '' ]]
	2025-09-17T00:03:46.395892183Z  + return
	2025-09-17T00:03:46.395894183Z  + configure_containerd
	2025-09-17T00:03:46.395896031Z  + local snapshotter=
	2025-09-17T00:03:46.395897768Z  + [[ -n '' ]]
	2025-09-17T00:03:46.395899666Z  + [[ -z '' ]]
	2025-09-17T00:03:46.396425793Z  ++ stat -f -c %T /kind
	2025-09-17T00:03:46.397496882Z  + container_filesystem=overlayfs
	2025-09-17T00:03:46.397513216Z  + [[ overlayfs == \z\f\s ]]
	2025-09-17T00:03:46.397517025Z  + [[ -n '' ]]
	2025-09-17T00:03:46.397519729Z  + configure_proxy
	2025-09-17T00:03:46.397522688Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-17T00:03:46.401344619Z  + [[ ! -z '' ]]
	2025-09-17T00:03:46.401361555Z  + cat
	2025-09-17T00:03:46.402662906Z  + fix_mount
	2025-09-17T00:03:46.402677493Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-17T00:03:46.402680038Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-17T00:03:46.403231730Z  ++ which mount
	2025-09-17T00:03:46.404561922Z  ++ which umount
	2025-09-17T00:03:46.405508241Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-17T00:03:46.411875870Z  ++ which mount
	2025-09-17T00:03:46.413244883Z  ++ which umount
	2025-09-17T00:03:46.414375005Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-17T00:03:46.416332213Z  +++ which mount
	2025-09-17T00:03:46.417347023Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-17T00:03:46.418595382Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-17T00:03:46.418611030Z  + echo 'INFO: remounting /sys read-only'
	2025-09-17T00:03:46.418618898Z  INFO: remounting /sys read-only
	2025-09-17T00:03:46.418622508Z  + mount -o remount,ro /sys
	2025-09-17T00:03:46.420772576Z  + echo 'INFO: making mounts shared'
	2025-09-17T00:03:46.420787551Z  INFO: making mounts shared
	2025-09-17T00:03:46.420790240Z  + mount --make-rshared /
	2025-09-17T00:03:46.422577079Z  + retryable_fix_cgroup
	2025-09-17T00:03:46.422982736Z  ++ seq 0 10
	2025-09-17T00:03:46.423966281Z  + for i in $(seq 0 10)
	2025-09-17T00:03:46.423983028Z  + fix_cgroup
	2025-09-17T00:03:46.424037360Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-17T00:03:46.424046883Z  + echo 'INFO: detected cgroup v2'
	2025-09-17T00:03:46.424049690Z  INFO: detected cgroup v2
	2025-09-17T00:03:46.424066868Z  + return
	2025-09-17T00:03:46.424084924Z  + return
	2025-09-17T00:03:46.424088437Z  + fix_machine_id
	2025-09-17T00:03:46.424091383Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-17T00:03:46.424191296Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-17T00:03:46.424202785Z  + rm -f /etc/machine-id
	2025-09-17T00:03:46.425411799Z  + systemd-machine-id-setup
	2025-09-17T00:03:46.430065644Z  Initializing machine ID from random generator.
	2025-09-17T00:03:46.432436097Z  + fix_product_name
	2025-09-17T00:03:46.432453562Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-17T00:03:46.432457042Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-17T00:03:46.432460092Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-17T00:03:46.432462848Z  + echo kind
	2025-09-17T00:03:46.433598655Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-17T00:03:46.435136274Z  + fix_product_uuid
	2025-09-17T00:03:46.435149333Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-17T00:03:46.435151615Z  + cat /proc/sys/kernel/random/uuid
	2025-09-17T00:03:46.436224157Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-17T00:03:46.436235012Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-17T00:03:46.436237286Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-17T00:03:46.436239464Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-17T00:03:46.437848778Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-17T00:03:46.437865181Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-17T00:03:46.437868938Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-17T00:03:46.437871973Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-17T00:03:46.439316697Z  + select_iptables
	2025-09-17T00:03:46.439332502Z  + local mode num_legacy_lines num_nft_lines
	2025-09-17T00:03:46.440298497Z  ++ grep -c '^-'
	2025-09-17T00:03:46.443203213Z  ++ true
	2025-09-17T00:03:46.443440720Z  + num_legacy_lines=0
	2025-09-17T00:03:46.444378438Z  ++ grep -c '^-'
	2025-09-17T00:03:46.450076067Z  + num_nft_lines=6
	2025-09-17T00:03:46.450103620Z  + '[' 0 -ge 6 ']'
	2025-09-17T00:03:46.450113655Z  + mode=nft
	2025-09-17T00:03:46.450116702Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-17T00:03:46.450119515Z  INFO: setting iptables to detected mode: nft
	2025-09-17T00:03:46.450121754Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:03:46.450257924Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:03:46.450277986Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:03:46.450685928Z  ++ seq 0 15
	2025-09-17T00:03:46.451517003Z  + for i in $(seq 0 15)
	2025-09-17T00:03:46.451532849Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:03:46.452837425Z  + return
	2025-09-17T00:03:46.452853583Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:03:46.452862371Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:03:46.452865520Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:03:46.453367075Z  ++ seq 0 15
	2025-09-17T00:03:46.454107994Z  + for i in $(seq 0 15)
	2025-09-17T00:03:46.454121175Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:03:46.455453461Z  + return
	2025-09-17T00:03:46.455479659Z  + enable_network_magic
	2025-09-17T00:03:46.455483716Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-17T00:03:46.455549079Z  + local docker_host_ip
	2025-09-17T00:03:46.457080461Z  ++ cut '-d ' -f1
	2025-09-17T00:03:46.457116173Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:03:46.457120614Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-17T00:03:46.554502840Z  + docker_host_ip=
	2025-09-17T00:03:46.554536630Z  + [[ -z '' ]]
	2025-09-17T00:03:46.555236214Z  ++ ip -4 route show default
	2025-09-17T00:03:46.555375074Z  ++ cut '-d ' -f3
	2025-09-17T00:03:46.557545780Z  + docker_host_ip=192.168.49.1
	2025-09-17T00:03:46.557837195Z  + iptables-save
	2025-09-17T00:03:46.558295933Z  + iptables-restore
	2025-09-17T00:03:46.560533471Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-17T00:03:46.570432326Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-17T00:03:46.572272754Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-17T00:03:46.573505444Z  + replaced='# Generated by Docker Engine.
	2025-09-17T00:03:46.573518966Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:03:46.573522379Z  # has been modified.
	2025-09-17T00:03:46.573525324Z  
	2025-09-17T00:03:46.573528056Z  nameserver 192.168.49.1
	2025-09-17T00:03:46.573530869Z  search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:03:46.573533792Z  options edns0 trust-ad ndots:0
	2025-09-17T00:03:46.573548220Z  
	2025-09-17T00:03:46.573551114Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:03:46.573554146Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:03:46.573556987Z  # Overrides: []
	2025-09-17T00:03:46.573559628Z  # Option ndots from: internal'
	2025-09-17T00:03:46.573562316Z  + [[ '' == '' ]]
	2025-09-17T00:03:46.573564981Z  + echo '# Generated by Docker Engine.
	2025-09-17T00:03:46.573567770Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:03:46.573570639Z  # has been modified.
	2025-09-17T00:03:46.573573263Z  
	2025-09-17T00:03:46.573575886Z  nameserver 192.168.49.1
	2025-09-17T00:03:46.573578660Z  search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:03:46.573581619Z  options edns0 trust-ad ndots:0
	2025-09-17T00:03:46.573584378Z  
	2025-09-17T00:03:46.573586959Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:03:46.573589887Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:03:46.573592576Z  # Overrides: []
	2025-09-17T00:03:46.573595164Z  # Option ndots from: internal'
	2025-09-17T00:03:46.573692134Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-17T00:03:46.573702168Z  + local files_to_update
	2025-09-17T00:03:46.573704259Z  + local should_fix_certificate=false
	2025-09-17T00:03:46.574876850Z  ++ cut '-d ' -f1
	2025-09-17T00:03:46.574901815Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:03:46.575382397Z  ++++ hostname
	2025-09-17T00:03:46.576260081Z  +++ timeout 5 getent ahostsv4 ha-198834-m04
	2025-09-17T00:03:46.579044096Z  + curr_ipv4=192.168.49.5
	2025-09-17T00:03:46.579057989Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-17T00:03:46.579060460Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-17T00:03:46.579062672Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-17T00:03:46.579064405Z  + [[ -n 192.168.49.5 ]]
	2025-09-17T00:03:46.579066090Z  + echo -n 192.168.49.5
	2025-09-17T00:03:46.580234959Z  ++ cut '-d ' -f1
	2025-09-17T00:03:46.580250152Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:03:46.580749743Z  ++++ hostname
	2025-09-17T00:03:46.581581009Z  +++ timeout 5 getent ahostsv6 ha-198834-m04
	2025-09-17T00:03:46.584256978Z  + curr_ipv6=
	2025-09-17T00:03:46.584273242Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-17T00:03:46.584288230Z  INFO: Detected IPv6 address: 
	2025-09-17T00:03:46.584290668Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-17T00:03:46.584292553Z  + [[ -n '' ]]
	2025-09-17T00:03:46.584294508Z  + false
	2025-09-17T00:03:46.584713109Z  ++ uname -a
	2025-09-17T00:03:46.585601408Z  + echo 'entrypoint completed: Linux ha-198834-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-17T00:03:46.585617582Z  entrypoint completed: Linux ha-198834-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-17T00:03:46.585621660Z  + exec /sbin/init
	2025-09-17T00:03:46.593523876Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-17T00:03:46.593547736Z  Detected virtualization docker.
	2025-09-17T00:03:46.593551192Z  Detected architecture x86-64.
	2025-09-17T00:03:46.593629852Z  
	2025-09-17T00:03:46.593633747Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-17T00:03:46.593637415Z  
	2025-09-17T00:03:46.594243643Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:03:46.594263442Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:03:46.594271194Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:03:46.594274723Z  Exiting PID 1...
	
	-- /stdout --
	I0917 00:03:46.990564  741759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:03:47.044392  741759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:03:47.03451417 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[N
ame:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:03:47.044469  741759 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:03:47.03451417 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Arc
hitecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Pl
ugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:03:47.044548  741759 network_create.go:284] running [docker network inspect ha-198834-m04] to gather additional debugging logs...
	I0917 00:03:47.044566  741759 cli_runner.go:164] Run: docker network inspect ha-198834-m04
	W0917 00:03:47.060926  741759 cli_runner.go:211] docker network inspect ha-198834-m04 returned with exit code 1
	I0917 00:03:47.060962  741759 network_create.go:287] error running [docker network inspect ha-198834-m04]: docker network inspect ha-198834-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-198834-m04 not found
	I0917 00:03:47.060980  741759 network_create.go:289] output of [docker network inspect ha-198834-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-198834-m04 not found
	
	** /stderr **
	I0917 00:03:47.061044  741759 client.go:171] duration metric: took 4.604563264s to LocalClient.Create
	I0917 00:03:49.062086  741759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:03:49.062162  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:49.080071  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:49.080236  741759 retry.go:31] will retry after 228.288498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:49.309719  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:49.326640  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:49.326765  741759 retry.go:31] will retry after 361.68126ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:49.689408  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:49.708372  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:49.708512  741759 retry.go:31] will retry after 443.285894ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:50.152082  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:50.169471  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:50.169581  741759 retry.go:31] will retry after 508.052916ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:50.678068  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:50.696207  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:03:50.696337  741759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:03:50.696358  741759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:50.696411  741759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:03:50.696465  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:50.715685  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:50.715795  741759 retry.go:31] will retry after 372.923572ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:51.089473  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:51.107670  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:51.107800  741759 retry.go:31] will retry after 375.751181ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:51.484160  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:51.501897  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:03:51.502037  741759 retry.go:31] will retry after 379.861894ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:51.882420  741759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:03:51.900679  741759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:03:51.900809  741759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:03:51.900823  741759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:03:51.900833  741759 start.go:128] duration metric: took 9.446474782s to createHost
	I0917 00:03:51.900842  741759 start.go:83] releasing machines lock for "ha-198834-m04", held for 9.44664221s
	W0917 00:03:51.900951  741759 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-198834" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-198834-m04" state Stopped: log: 2025-09-17T00:03:46.594243643Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:03:46.594263442Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:03:46.594271194Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:03:46.594274723Z  Exiting PID 1...: container exited unexpectedly
	* Failed to start docker container. Running "minikube delete -p ha-198834" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-198834-m04" state Stopped: log: 2025-09-17T00:03:46.594243643Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:03:46.594263442Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:03:46.594271194Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:03:46.594274723Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:03:51.903541  741759 out.go:203] 
	W0917 00:03:51.905079  741759 out.go:285] X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-198834-m04" state Stopped: log: 2025-09-17T00:03:46.594243643Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:03:46.594263442Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:03:46.594271194Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:03:46.594274723Z  Exiting PID 1...: container exited unexpectedly
	X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-198834-m04" state Stopped: log: 2025-09-17T00:03:46.594243643Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:03:46.594263442Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:03:46.594271194Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:03:46.594274723Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:03:51.906597  741759 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-198834 node add --alsologtostderr -v 5" : exit status 80
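The root cause is visible in the entrypoint log captured above: systemd (PID 1) inside the new ha-198834-m04 container failed with "Failed to create control group inotify object: Too many open files" and exited, so the container stopped before SSH ever came up. That error usually indicates the host has run out of inotify instances rather than ordinary file descriptors. A minimal diagnostic sketch for the CI host, assuming fs.inotify limits are the bottleneck (the values below are illustrative, not taken from this run):

    # current inotify limits on the host
    sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
    # how many inotify instances are currently held across all processes
    find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l
    # raise the limits if they are exhausted (illustrative values)
    sudo sysctl -w fs.inotify.max_user_instances=1024
    sudo sysctl -w fs.inotify.max_user_watches=1048576
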
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:57:02.530585618Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6698b0ad85a9078b37114c4e66646c6dc7a67a706d28557d80b29fea1d15d512",
	            "SandboxKey": "/var/run/docker/netns/6698b0ad85a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:eb:f5:3a:ee:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "669cb4f772890bad35a4ad4cdb1934f42912d7e03fc353fd08c3e3a046cfba54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
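Unlike the stopped m04 node, the primary ha-198834 container above is running with its ports published (22/tcp on 127.0.0.1:32783, 8443/tcp on 127.0.0.1:32786, and so on). The SSH-port lookup that kept failing for m04 in the retry loop earlier reads exactly this data; run against the healthy primary it succeeds. A minimal sketch, with the quoting simplified relative to the harness's own invocation:

    # read the host port mapped to 22/tcp from the inspect data above
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-198834
    # prints 32783 for this run
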
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.036517868s)
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.io                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.io                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.io                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.default                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.default                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.default.svc.cluster.local                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-kg4q6 -- sh -c ping -c 1 192.168.49.1                                        │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ kubectl │ ha-198834 kubectl -- exec busybox-7b57f96db7-l2jn5 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ node    │ ha-198834 node add --alsologtostderr -v 5                                                                                 │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:58.042095  722351 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:58.042245  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042257  722351 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:58.042263  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042455  722351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0916 23:56:58.043028  722351 out.go:368] Setting JSON to false
	I0916 23:56:58.043951  722351 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9550,"bootTime":1758057468,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:58.044043  722351 start.go:140] virtualization: kvm guest
	I0916 23:56:58.045935  722351 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:58.047229  722351 notify.go:220] Checking for updates...
	I0916 23:56:58.047241  722351 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:58.048693  722351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:58.049858  722351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:56:58.051172  722351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:56:58.052335  722351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:58.053390  722351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:58.054603  722351 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:58.077260  722351 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:58.077444  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.132853  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.122248025 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.132998  722351 docker.go:318] overlay module found
	I0916 23:56:58.135611  722351 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:58.136750  722351 start.go:304] selected driver: docker
	I0916 23:56:58.136770  722351 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:58.136782  722351 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:58.137364  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.190249  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.179811473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.190455  722351 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:58.190736  722351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:58.192641  722351 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:58.193978  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:56:58.194069  722351 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:58.194094  722351 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:58.194188  722351 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:58.195605  722351 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0916 23:56:58.196688  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:56:58.197669  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:58.198952  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.199018  722351 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0916 23:56:58.199034  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:58.199064  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:58.199149  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:58.199167  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:56:58.199618  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:56:58.199650  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json: {Name:mkfd30616e0167206552e80675557cfcc4fee172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:58.218451  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:58.218470  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:58.218487  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:58.218525  722351 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:58.218643  722351 start.go:364] duration metric: took 94.227µs to acquireMachinesLock for "ha-198834"
	I0916 23:56:58.218683  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:56:58.218779  722351 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:58.220943  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:58.221292  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:56:58.221335  722351 client.go:168] LocalClient.Create starting
	I0916 23:56:58.221405  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:56:58.221441  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221461  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221543  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:56:58.221570  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221588  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221956  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:58.238665  722351 cli_runner.go:211] docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:58.238743  722351 network_create.go:284] running [docker network inspect ha-198834] to gather additional debugging logs...
	I0916 23:56:58.238769  722351 cli_runner.go:164] Run: docker network inspect ha-198834
	W0916 23:56:58.254999  722351 cli_runner.go:211] docker network inspect ha-198834 returned with exit code 1
	I0916 23:56:58.255086  722351 network_create.go:287] error running [docker network inspect ha-198834]: docker network inspect ha-198834: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-198834 not found
	I0916 23:56:58.255122  722351 network_create.go:289] output of [docker network inspect ha-198834]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-198834 not found
	
	** /stderr **
	I0916 23:56:58.255285  722351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:58.272422  722351 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b56820}
	I0916 23:56:58.272473  722351 network_create.go:124] attempt to create docker network ha-198834 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:58.272524  722351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-198834 ha-198834
	I0916 23:56:58.332062  722351 network_create.go:108] docker network ha-198834 192.168.49.0/24 created
	I0916 23:56:58.332109  722351 kic.go:121] calculated static IP "192.168.49.2" for the "ha-198834" container
	I0916 23:56:58.332180  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:58.347722  722351 cli_runner.go:164] Run: docker volume create ha-198834 --label name.minikube.sigs.k8s.io=ha-198834 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:58.365722  722351 oci.go:103] Successfully created a docker volume ha-198834
	I0916 23:56:58.365811  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --entrypoint /usr/bin/test -v ha-198834:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:58.752716  722351 oci.go:107] Successfully prepared a docker volume ha-198834
	I0916 23:56:58.752766  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.752791  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:58.752860  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:02.431811  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.678879308s)
	I0916 23:57:02.431852  722351 kic.go:203] duration metric: took 3.679056906s to extract preloaded images to volume ...
	W0916 23:57:02.431981  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:02.432030  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:02.432094  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:02.483868  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834 --name ha-198834 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834 --network ha-198834 --ip 192.168.49.2 --volume ha-198834:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:02.749244  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Running}}
	I0916 23:57:02.769059  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:02.787342  722351 cli_runner.go:164] Run: docker exec ha-198834 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:02.836161  722351 oci.go:144] the created container "ha-198834" has a running status.
	I0916 23:57:02.836195  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa...
	I0916 23:57:03.023198  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:03.023332  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:03.051071  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.071057  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:03.071081  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:03.121506  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.138447  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:03.138553  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.156407  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.156657  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.156674  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:03.295893  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.295938  722351 ubuntu.go:182] provisioning hostname "ha-198834"
	I0916 23:57:03.296023  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.314748  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.314993  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.315008  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0916 23:57:03.463642  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.463716  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.480946  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.481224  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.481264  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:03.616528  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:03.616561  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:03.616587  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:03.616603  722351 provision.go:84] configureAuth start
	I0916 23:57:03.616666  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:03.633505  722351 provision.go:143] copyHostCerts
	I0916 23:57:03.633553  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633590  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:03.633601  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633689  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:03.633796  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633824  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:03.633834  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633870  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:03.633969  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.633996  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:03.634007  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.634050  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:03.634188  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0916 23:57:03.786555  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:03.786617  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:03.786691  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.804115  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:03.900955  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:03.901014  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:57:03.928655  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:03.928721  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:03.953468  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:03.953537  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:03.978330  722351 provision.go:87] duration metric: took 361.708211ms to configureAuth
	I0916 23:57:03.978356  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:03.978536  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:03.978599  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.995700  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.995934  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.995954  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:04.131514  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:04.131541  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:04.131675  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:04.131752  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.148752  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.148996  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.149060  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:04.298185  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:04.298270  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.315091  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.315309  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.315326  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:05.420254  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:04.295122578 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:05.420296  722351 machine.go:96] duration metric: took 2.281822221s to provisionDockerMachine
	I0916 23:57:05.420315  722351 client.go:171] duration metric: took 7.198967751s to LocalClient.Create
	I0916 23:57:05.420340  722351 start.go:167] duration metric: took 7.199048943s to libmachine.API.Create "ha-198834"
	I0916 23:57:05.420350  722351 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0916 23:57:05.420364  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:05.420443  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:05.420495  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.437726  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.536164  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:05.539580  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:05.539616  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:05.539633  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:05.539639  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:05.539653  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:05.539713  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:05.539819  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:05.539836  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:05.540001  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:05.548691  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:05.575226  722351 start.go:296] duration metric: took 154.859714ms for postStartSetup
	I0916 23:57:05.575586  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.591876  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:05.592351  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:05.592412  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.609076  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.701881  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:05.706378  722351 start.go:128] duration metric: took 7.487581015s to createHost
	I0916 23:57:05.706400  722351 start.go:83] releasing machines lock for "ha-198834", held for 7.487744986s
	I0916 23:57:05.706457  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.723047  722351 ssh_runner.go:195] Run: cat /version.json
	I0916 23:57:05.723106  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.723117  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:05.723202  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.739830  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.739978  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.900291  722351 ssh_runner.go:195] Run: systemctl --version
	I0916 23:57:05.905029  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:05.909440  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:05.939050  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:05.939153  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:05.968631  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:05.968659  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:05.968693  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:05.968830  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:05.985490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:05.997349  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:06.007949  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:06.008036  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:06.018490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.028804  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:06.039330  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.049816  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:06.059493  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:06.069825  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:06.080461  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:06.091039  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:06.100019  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:06.109126  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.178675  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:06.251706  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:06.251760  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:06.251809  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:06.264383  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.275792  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:06.294666  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.306227  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:06.317564  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:06.334759  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:06.338327  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:06.348543  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:06.366680  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:06.432452  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:06.496386  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:06.496496  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:06.515617  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:06.527317  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.590441  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:07.360810  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:07.372759  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:07.384493  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.396808  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:07.466973  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:07.538629  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.607976  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:07.630119  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:07.642121  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.709050  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:07.784177  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.797686  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:07.797763  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:07.801576  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:07.801630  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:07.804977  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:07.837851  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:07.837957  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.862098  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.888678  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:07.888755  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:07.905526  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:07.909605  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:07.921677  722351 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:57:07.921793  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:07.921842  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.943020  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.943041  722351 docker.go:621] Images already preloaded, skipping extraction
	I0916 23:57:07.943097  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.963583  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.963609  722351 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:57:07.963623  722351 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0916 23:57:07.963750  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:07.963822  722351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 23:57:08.012977  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:08.013007  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:08.013021  722351 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:57:08.013044  722351 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:57:08.013180  722351 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:57:08.013203  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:08.013244  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:08.026529  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:08.026652  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:57:08.026716  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:08.036301  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:08.036379  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:57:08.046128  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 23:57:08.064738  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:08.083216  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0916 23:57:08.101114  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:57:08.121332  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:08.125035  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:08.136734  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:08.207460  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:08.231438  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0916 23:57:08.231468  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:08.231491  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.231634  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:08.231682  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:08.231692  722351 certs.go:256] generating profile certs ...
	I0916 23:57:08.231748  722351 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:08.231761  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt with IP's: []
	I0916 23:57:08.595971  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt ...
	I0916 23:57:08.596008  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt: {Name:mk045c8005e18afdd173496398fb640e85421530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596237  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key ...
	I0916 23:57:08.596255  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key: {Name:mkec7f349d5172bad8ab50dce27926cf4a2810b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596372  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28
	I0916 23:57:08.596390  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:57:08.930707  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 ...
	I0916 23:57:08.930740  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28: {Name:mke8743bf1c0faa0b20cb0336c0e1879fcb77e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.930956  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 ...
	I0916 23:57:08.930975  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28: {Name:mkd63d446f2fe51bc154cd1e5df7f39c484f911b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.931094  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:08.931221  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:57:08.931283  722351 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:08.931298  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt with IP's: []
	I0916 23:57:09.286083  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt ...
	I0916 23:57:09.286118  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt: {Name:mk7d8f9e6931aff0b35e5110e6bb582a3f00c824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286322  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key ...
	I0916 23:57:09.286339  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key: {Name:mkaeef389ff7f9a0b6729cce56a45b0b3aa13296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286448  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:09.286467  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:09.286479  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:09.286489  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:09.286513  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:09.286527  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:09.286538  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:09.286550  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:09.286602  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:09.286641  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:09.286650  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:09.286674  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:09.286702  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:09.286730  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:09.286767  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:09.286792  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.286805  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.286817  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.287381  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:09.312982  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:09.337940  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:09.362347  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:09.386557  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:57:09.412140  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:09.436893  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:09.461871  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:09.487876  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:09.516060  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:09.541440  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:09.567069  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:57:09.585649  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:09.591504  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:09.602004  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605727  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605791  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.612679  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:09.622556  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:09.632414  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636379  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636441  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.643659  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:09.653893  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:09.663837  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667554  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667899  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.675833  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
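The sequence above links each installed PEM into /etc/ssl/certs twice: once by name and once under its OpenSSL subject hash (the b5213941.0-style names), which is how OpenSSL-based clients look up trusted CAs. A rough local sketch of that hash-and-symlink step, assuming openssl is on PATH; the path and the direct pem-to-hash link are illustrative simplifications, not minikube's actual code:

```go
// Sketch: compute the OpenSSL subject hash of a PEM and publish it in
// /etc/ssl/certs/<hash>.0, mirroring the openssl + ln -fs pair in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pemPath string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash used for
	// the /etc/ssl/certs/<hash>.0 lookup convention.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```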
	I0916 23:57:09.686032  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:09.689851  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:09.689923  722351 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:09.690062  722351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 23:57:09.708774  722351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:57:09.718368  722351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:57:09.727825  722351 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:57:09.727888  722351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:57:09.738106  722351 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:57:09.738126  722351 kubeadm.go:157] found existing configuration files:
	
	I0916 23:57:09.738165  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:57:09.747962  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:57:09.748017  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:57:09.757385  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:57:09.766772  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:57:09.766839  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:57:09.775735  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.784848  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:57:09.784955  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.793751  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:57:09.803170  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:57:09.803229  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
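The cleanup pass above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing or does not reference it, so the kubeadm init that follows starts from a clean slate. A sketch of that check-and-remove pattern (not minikube's actual implementation; the endpoint and paths are taken from the log above):

```go
// Sketch: keep a kubeconfig only if it already points at the expected
// control-plane endpoint; otherwise remove it so kubeadm regenerates it.
package main

import (
	"bytes"
	"log"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, endpoint) {
			log.Printf("keeping %s", f)
			continue
		}
		// Missing file or wrong endpoint: delete it (ignore "already gone").
		if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
			log.Printf("remove %s: %v", f, rmErr)
		}
	}
}
```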
	I0916 23:57:09.811944  722351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:57:09.867145  722351 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:57:09.919246  722351 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:57:19.614241  722351 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:57:19.614308  722351 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:57:19.614466  722351 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:57:19.614561  722351 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:57:19.614607  722351 kubeadm.go:310] OS: Linux
	I0916 23:57:19.614692  722351 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:57:19.614771  722351 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:57:19.614837  722351 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:57:19.614899  722351 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:57:19.614977  722351 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:57:19.615057  722351 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:57:19.615125  722351 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:57:19.615202  722351 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:57:19.615307  722351 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:57:19.615454  722351 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:57:19.615594  722351 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:57:19.615688  722351 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:57:19.618162  722351 out.go:252]   - Generating certificates and keys ...
	I0916 23:57:19.618260  722351 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:57:19.618349  722351 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:57:19.618445  722351 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:57:19.618533  722351 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:57:19.618635  722351 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:57:19.618717  722351 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:57:19.618792  722351 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:57:19.618993  722351 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619071  722351 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:57:19.619249  722351 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619335  722351 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:57:19.619434  722351 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:57:19.619517  722351 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:57:19.619599  722351 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:57:19.619679  722351 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:57:19.619763  722351 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:57:19.619846  722351 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:57:19.619990  722351 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:57:19.620069  722351 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:57:19.620183  722351 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:57:19.620281  722351 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:57:19.621487  722351 out.go:252]   - Booting up control plane ...
	I0916 23:57:19.621595  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:57:19.621704  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:57:19.621799  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:57:19.621956  722351 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:57:19.622047  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:57:19.622137  722351 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:57:19.622213  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:57:19.622246  722351 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:57:19.622371  722351 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:57:19.622503  722351 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:57:19.622564  722351 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000941296s
	I0916 23:57:19.622663  722351 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:57:19.622778  722351 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:57:19.622893  722351 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:57:19.623021  722351 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:57:19.623126  722351 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.545161134s
	I0916 23:57:19.623210  722351 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.1638517s
	I0916 23:57:19.623273  722351 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001738286s
	I0916 23:57:19.623369  722351 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:57:19.623478  722351 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:57:19.623551  722351 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:57:19.623792  722351 kubeadm.go:310] [mark-control-plane] Marking the node ha-198834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:57:19.623845  722351 kubeadm.go:310] [bootstrap-token] Using token: wg2on6.splp3qzu9xv61vdp
	I0916 23:57:19.625599  722351 out.go:252]   - Configuring RBAC rules ...
	I0916 23:57:19.625697  722351 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:57:19.625769  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:57:19.625966  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:57:19.626123  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:57:19.626261  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:57:19.626367  722351 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:57:19.626473  722351 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:57:19.626522  722351 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:57:19.626564  722351 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:57:19.626570  722351 kubeadm.go:310] 
	I0916 23:57:19.626631  722351 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:57:19.626643  722351 kubeadm.go:310] 
	I0916 23:57:19.626737  722351 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:57:19.626747  722351 kubeadm.go:310] 
	I0916 23:57:19.626781  722351 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:57:19.626863  722351 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:57:19.626960  722351 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:57:19.626973  722351 kubeadm.go:310] 
	I0916 23:57:19.627050  722351 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:57:19.627058  722351 kubeadm.go:310] 
	I0916 23:57:19.627113  722351 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:57:19.627119  722351 kubeadm.go:310] 
	I0916 23:57:19.627167  722351 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:57:19.627238  722351 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:57:19.627297  722351 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:57:19.627302  722351 kubeadm.go:310] 
	I0916 23:57:19.627381  722351 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:57:19.627449  722351 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:57:19.627454  722351 kubeadm.go:310] 
	I0916 23:57:19.627525  722351 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627618  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e \
	I0916 23:57:19.627647  722351 kubeadm.go:310] 	--control-plane 
	I0916 23:57:19.627653  722351 kubeadm.go:310] 
	I0916 23:57:19.627725  722351 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:57:19.627733  722351 kubeadm.go:310] 
	I0916 23:57:19.627801  722351 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627921  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e 
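The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which is how kubeadm pins the CA for joining nodes. A small sketch of recomputing that value from the CA file; the path is the one shown in the log and reading it requires root on the node:

```go
// Sketch: derive the kubeadm discovery CA hash (sha256 of the CA cert's
// Subject Public Key Info) so it can be compared against the join command.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log above
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // should match the hash in the kubeadm join command
}
```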
	I0916 23:57:19.627933  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:19.627939  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:19.630017  722351 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:57:19.631017  722351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:57:19.635194  722351 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:57:19.635211  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:57:19.655634  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:57:19.855102  722351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:57:19.855186  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:19.855265  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834 minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=true
	I0916 23:57:19.863538  722351 ops.go:34] apiserver oom_adj: -16
	I0916 23:57:19.931275  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.432025  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.932100  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.432105  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.932376  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.432213  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.931583  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.431392  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.932193  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.431927  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.504799  722351 kubeadm.go:1105] duration metric: took 4.649687278s to wait for elevateKubeSystemPrivileges
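The repeated `kubectl get sa default` calls above are a readiness poll: the elevateKubeSystemPrivileges step retries roughly every 500ms until the default ServiceAccount exists before the RBAC setup is considered done. A minimal sketch of that pattern, with the kubeconfig path taken from the log and the timeout as an assumption:

```go
// Sketch: poll `kubectl get sa default` until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout is illustrative
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for default ServiceAccount")
	os.Exit(1)
}
```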
	I0916 23:57:24.504835  722351 kubeadm.go:394] duration metric: took 14.81493092s to StartCluster
	I0916 23:57:24.504858  722351 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.504967  722351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:57:24.505808  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.506080  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:57:24.506079  722351 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:24.506102  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.506120  722351 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:57:24.506215  722351 addons.go:69] Setting storage-provisioner=true in profile "ha-198834"
	I0916 23:57:24.506241  722351 addons.go:238] Setting addon storage-provisioner=true in "ha-198834"
	I0916 23:57:24.506236  722351 addons.go:69] Setting default-storageclass=true in profile "ha-198834"
	I0916 23:57:24.506263  722351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-198834"
	I0916 23:57:24.506271  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.506311  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:24.506630  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.506797  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.527476  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:24.528010  722351 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:57:24.528028  722351 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:57:24.528032  722351 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:57:24.528036  722351 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:57:24.528039  722351 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:57:24.528105  722351 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:57:24.528384  722351 addons.go:238] Setting addon default-storageclass=true in "ha-198834"
	I0916 23:57:24.528420  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.528683  722351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:57:24.528891  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.530050  722351 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.530067  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:57:24.530109  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.548463  722351 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.548490  722351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:57:24.548552  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.551711  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.575963  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
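Both ssh clients above dial 127.0.0.1 on whatever host port Docker published for the container's 22/tcp; that port is read back with the same inspect template shown in the log. A sketch of that lookup (the container name is the one from the log; the key path in the printed hint is illustrative):

```go
// Sketch: resolve the host port Docker mapped to the kic container's SSH port.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-198834").Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	// minikube then opens an ssh session as user "docker" against this port.
	fmt.Printf("ssh -i <profile id_rsa> -p %s docker@127.0.0.1\n", port)
}
```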
	I0916 23:57:24.622716  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:57:24.680948  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.725959  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.815565  722351 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
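The long sed pipeline above patches the coredns ConfigMap so that host.minikube.internal resolves to the network gateway. Reconstructed from that command, the stanza inserted ahead of the `forward . /etc/resolv.conf` line (a `log` directive is also added before `errors`) looks like this; it is shown as a Go constant only for reference, the rest of the Corefile is left as-is:

```go
// Reference: the hosts block injected into the coredns Corefile by the sed
// pipeline in the log above.
package main

import "fmt"

const hostsStanza = `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }`

func main() { fmt.Println(hostsStanza) }
```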
	I0916 23:57:25.027949  722351 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:57:25.029176  722351 addons.go:514] duration metric: took 523.059617ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:57:25.029216  722351 start.go:246] waiting for cluster config update ...
	I0916 23:57:25.029233  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:25.030834  722351 out.go:203] 
	I0916 23:57:25.032180  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:25.032246  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.033846  722351 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0916 23:57:25.035651  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:25.036699  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:25.038502  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.038524  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:25.038599  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:25.038624  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:25.038635  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:25.038696  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.064556  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:25.064575  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:25.064593  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:25.064625  722351 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:25.064737  722351 start.go:364] duration metric: took 87.928µs to acquireMachinesLock for "ha-198834-m02"
	I0916 23:57:25.064767  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:25.064852  722351 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:57:25.067030  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:25.067261  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:25.067302  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:25.067392  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:25.067435  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067451  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067520  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:25.067544  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067561  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067817  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:25.087287  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc0008ae780 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:25.087329  722351 kic.go:121] calculated static IP "192.168.49.3" for the "ha-198834-m02" container
	I0916 23:57:25.087390  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:25.104356  722351 cli_runner.go:164] Run: docker volume create ha-198834-m02 --label name.minikube.sigs.k8s.io=ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:25.128318  722351 oci.go:103] Successfully created a docker volume ha-198834-m02
	I0916 23:57:25.128423  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --entrypoint /usr/bin/test -v ha-198834-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:25.555443  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m02
	I0916 23:57:25.555486  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.555507  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:25.555574  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.769985  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.214340138s)
	I0916 23:57:29.770025  722351 kic.go:203] duration metric: took 4.214511914s to extract preloaded images to volume ...
	W0916 23:57:29.770138  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.770180  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.770230  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.831280  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m02 --name ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m02 --network ha-198834 --ip 192.168.49.3 --volume ha-198834-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:30.118263  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Running}}
	I0916 23:57:30.140753  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.161053  722351 cli_runner.go:164] Run: docker exec ha-198834-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:30.204746  722351 oci.go:144] the created container "ha-198834-m02" has a running status.
	I0916 23:57:30.204782  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa...
	I0916 23:57:30.491277  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:30.491341  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:30.523169  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.546155  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:30.546178  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.603616  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.624695  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.624784  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.648569  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.648946  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.648966  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.800750  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.800784  722351 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0916 23:57:30.800873  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.822237  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.822505  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.822519  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0916 23:57:30.984206  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.984307  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.007082  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.007398  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.007430  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:31.152561  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:31.152598  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:31.152624  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:31.152644  722351 provision.go:84] configureAuth start
	I0916 23:57:31.152709  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:31.171931  722351 provision.go:143] copyHostCerts
	I0916 23:57:31.171978  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172008  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:31.172014  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172081  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:31.172159  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172181  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:31.172185  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172216  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:31.172262  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172279  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:31.172287  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172310  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:31.172361  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
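configureAuth above issues a Docker TLS server certificate for the new node with the SAN list shown in the log (127.0.0.1, 192.168.49.3, ha-198834-m02, localhost, minikube) and org jenkins.ha-198834-m02. A self-contained sketch of building a certificate with those SANs using crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair listed above, so treat the lifetime and signing details as assumptions:

```go
// Sketch: create a server certificate whose SANs match the san=[...] list above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-198834-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	// Self-signed here; minikube uses its CA cert/key as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```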
	I0916 23:57:31.314068  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:31.314146  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:31.314208  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.336792  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:31.442195  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:31.442269  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:31.472780  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:31.472841  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:31.499569  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:31.499653  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:31.530277  722351 provision.go:87] duration metric: took 377.61476ms to configureAuth
	I0916 23:57:31.530311  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:31.530528  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:31.530587  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.548573  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.548821  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.548841  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:31.695327  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:31.695357  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:31.695559  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:31.695639  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.715926  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.716269  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.716384  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:31.879960  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:31.880054  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.901465  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.901783  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.901817  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:33.107385  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:31.877658246 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:33.107432  722351 machine.go:96] duration metric: took 2.482713737s to provisionDockerMachine
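The docker.service update a few lines above relies on diff's exit status: diff -u returns 0 when the rendered docker.service.new matches the installed unit, so the mv / daemon-reload / enable / restart branch after || only runs when the unit actually changed. The unit that ended up in effect can be confirmed afterwards (the flow itself runs systemctl cat docker.service a few steps below):

    # Illustrative check of the installed unit and its effective ExecStart.
    sudo systemctl cat docker.service
    systemctl show docker --property=ExecStart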
	I0916 23:57:33.107448  722351 client.go:171] duration metric: took 8.040135103s to LocalClient.Create
	I0916 23:57:33.107471  722351 start.go:167] duration metric: took 8.040214449s to libmachine.API.Create "ha-198834"
	I0916 23:57:33.107480  722351 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0916 23:57:33.107493  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:33.107570  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:33.107624  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.129478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.235200  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:33.239799  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:33.239842  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:33.239854  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:33.239862  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:33.239881  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:33.239961  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:33.240070  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:33.240085  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:33.240211  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:33.252619  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:33.291135  722351 start.go:296] duration metric: took 183.636707ms for postStartSetup
	I0916 23:57:33.291600  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.313645  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:33.314041  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:33.314103  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.337314  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.439716  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:33.445408  722351 start.go:128] duration metric: took 8.380530846s to createHost
	I0916 23:57:33.445437  722351 start.go:83] releasing machines lock for "ha-198834-m02", held for 8.380681461s
	I0916 23:57:33.445500  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.469661  722351 out.go:179] * Found network options:
	I0916 23:57:33.471226  722351 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:33.472373  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:33.472429  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:33.472520  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:33.472550  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:33.472570  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.472621  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.495822  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.496478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.601441  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:33.704002  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:33.704085  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:33.742848  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
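Per the two find commands above, the loopback CNI config is normalized (a "name" field added and cniVersion pinned to 1.0.0) while any bridge/podman configs are renamed to *.mk_disabled, so only the cluster's own CNI (kindnet in this profile) manages pod networking. The patched loopback file is expected to look roughly like this (file name and exact formatting are assumptions):

    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }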
	I0916 23:57:33.742881  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:33.742929  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:33.743066  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:33.765394  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:33.781702  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:33.796106  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:33.796186  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:33.811490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.825594  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:33.840006  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.853819  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:33.867424  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:33.882022  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:33.896562  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
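Taken together, the sed edits above leave /etc/containerd/config.toml with roughly the excerpt below; the surrounding layout is an assumption, but the values match the commands in the log (systemd cgroups, runc v2, pause:3.10.1, unprivileged ports enabled):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true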
	I0916 23:57:33.910813  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:33.923436  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
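The echo above enables IPv4 forwarding only for the running kernel, which is all the node needs here; a persistent equivalent (not something this flow does) would go through sysctl:

    sudo sysctl -w net.ipv4.ip_forward=1
    echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
    sudo sysctl --system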
	I0916 23:57:33.936892  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.033978  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:34.137820  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:34.137955  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:34.138026  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:34.154788  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.170769  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:34.190397  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.207526  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:34.224333  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:34.249827  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:34.255532  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:34.270253  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:34.296311  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:34.391517  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:34.486390  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:34.486452  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
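The 129-byte daemon.json payload itself is not printed in the log; a minimal file matching the stated goal (systemd cgroup driver for dockerd) would look something like the sketch below, with keys beyond exec-opts being assumptions. The result can be checked with docker info after the restart that follows:

    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "storage-driver": "overlay2"
    }

    # Verify the active cgroup driver once docker has restarted:
    docker info --format '{{.CgroupDriver}}'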
	I0916 23:57:34.512957  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:34.529696  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.623612  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:35.389236  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:35.402665  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:35.418828  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.433733  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:35.524509  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:35.615815  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.688879  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:35.713552  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:35.729264  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.818355  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:35.908063  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.921416  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:35.921483  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:35.925600  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:35.925666  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:35.929510  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:35.970926  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
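The crictl version output above goes through /etc/crictl.yaml, which an earlier step pointed at unix:///var/run/cri-dockerd.sock, so crictl talks to Docker via the cri-dockerd shim and reports RuntimeName docker. Any further crictl call reads the same endpoint, for example:

    # Both read the runtime-endpoint from /etc/crictl.yaml written above.
    sudo crictl version
    sudo crictl ps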
	I0916 23:57:35.971002  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.001052  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.032731  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:36.033881  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:36.035387  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:36.055948  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:36.061767  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:36.076229  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:57:36.076482  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:36.076794  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:36.099199  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:36.099483  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0916 23:57:36.099498  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:36.099514  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.099667  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:36.099721  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:36.099735  722351 certs.go:256] generating profile certs ...
	I0916 23:57:36.099834  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:36.099867  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4
	I0916 23:57:36.099889  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:36.171638  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 ...
	I0916 23:57:36.171669  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4: {Name:mk274e4893d598b40c8fed777bc1c7c2e951159a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.171866  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 ...
	I0916 23:57:36.171885  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4: {Name:mkf2a66869f0c345fb28cc9925dc0bb02623a928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.172011  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:36.172195  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:57:36.172362  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:36.172381  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:36.172396  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:36.172415  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:36.172438  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:36.172457  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:36.172474  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:36.172493  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:36.172512  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:36.172589  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:36.172634  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:36.172648  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:36.172679  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:36.172703  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:36.172736  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:36.172796  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:36.172840  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.172861  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.172878  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.172963  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:36.194873  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:36.286293  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:36.291948  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:36.308150  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:36.312206  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:36.325598  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:36.329618  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:36.346110  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:36.350017  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:36.365628  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:36.369445  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:36.383675  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:36.387388  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:57:36.403394  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:36.432068  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:36.461592  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:36.491261  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:36.523895  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:36.552719  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:36.580284  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:36.608342  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:36.639670  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:36.672003  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:36.703856  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:36.734275  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:36.755638  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:36.777805  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:36.799338  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:36.821463  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:36.843600  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:57:36.867808  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:36.889233  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:36.896091  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:36.908363  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913145  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913212  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.921857  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:36.934186  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:36.945282  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949180  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949249  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.958068  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:36.970160  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:36.981053  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985350  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985410  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.993828  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
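The openssl x509 -hash calls above print each certificate's subject hash, and the ln -fs commands create the <hash>.0 symlinks OpenSSL uses to look CAs up in /etc/ssl/certs; for minikubeCA.pem the hash is b5213941, matching the link created on the last line. The same pair of commands, spelled out:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0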
	I0916 23:57:37.004616  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:37.008764  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:37.008830  722351 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0916 23:57:37.008961  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
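The kubelet fragment above becomes the drop-in and unit written a few steps below (10-kubeadm.conf and kubelet.service); the empty ExecStart= line clears any inherited command before setting the node-specific one, the same override pattern used for docker.service earlier. Once those files are in place, the effective command line can be inspected with:

    systemctl cat kubelet
    systemctl show kubelet --property=ExecStart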
	I0916 23:57:37.008998  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:37.009050  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:37.026582  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:37.026656  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
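The manifest above is installed as a static pod (scp to /etc/kubernetes/manifests/kube-vip.yaml just below), so the kubelet runs kube-vip directly and it announces the control-plane VIP 192.168.49.254 over ARP on eth0. Because the ip_vs modules were not found earlier, it only holds the VIP and does not do IPVS load-balancing. Once the node has joined, the mirror pod and the VIP can be checked with (pod name follows the static-pod naming seen later in this log):

    kubectl -n kube-system get pod kube-vip-ha-198834-m02 -o wide
    ping -c1 192.168.49.254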
	I0916 23:57:37.026738  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:37.036867  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:37.036974  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:37.046606  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:57:37.070259  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:37.092325  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:37.116853  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:37.120789  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
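After this step and the earlier host.minikube.internal patch, /etc/hosts on the node is expected to contain both cluster-internal names:

    192.168.49.1     host.minikube.internal
    192.168.49.254   control-plane.minikube.internal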
	I0916 23:57:37.137396  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:37.223494  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:37.256254  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:37.256574  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:37.256705  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:37.256762  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:37.278264  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:37.435308  722351 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:37.435366  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:54.013635  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (16.578241326s)
	I0916 23:57:54.013701  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:54.233708  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:57:54.308006  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:54.383356  722351 start.go:319] duration metric: took 17.126777498s to joinCluster
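The join itself took about 17s; the checks that follow poll the same state a manual inspection would show (assuming kubectl is pointed at this profile's kubeconfig):

    kubectl get nodes -o wide
    kubectl -n kube-system get pods -o wide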
	I0916 23:57:54.383433  722351 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:54.383691  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:54.385020  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:54.386187  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:54.491315  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:54.505328  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:54.505398  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:54.505659  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508947  722351 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0916 23:57:56.508979  722351 node_ready.go:38] duration metric: took 2.003299323s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508998  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:56.509065  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:56.521258  722351 api_server.go:72] duration metric: took 2.137779117s to wait for apiserver process to appear ...
	I0916 23:57:56.521298  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:56.521326  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:56.527086  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:56.528055  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:56.528078  722351 api_server.go:131] duration metric: took 6.77168ms to wait for apiserver health ...
	I0916 23:57:56.528087  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:56.534412  722351 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:56.534478  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.534486  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.534497  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.534503  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.534515  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534524  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.534535  722351 system_pods.go:61] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534541  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.534547  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.534559  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.534564  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.534667  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.534716  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534725  722351 system_pods.go:61] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534731  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.534743  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.534748  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.534753  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.534758  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.534765  722351 system_pods.go:74] duration metric: took 6.672375ms to wait for pod list to return data ...
	I0916 23:57:56.534774  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:56.538351  722351 default_sa.go:45] found service account: "default"
	I0916 23:57:56.538385  722351 default_sa.go:55] duration metric: took 3.603096ms for default service account to be created ...
	I0916 23:57:56.538399  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:56.542274  722351 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:56.542301  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.542307  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.542311  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.542314  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.542321  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542325  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.542330  722351 system_pods.go:89] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542334  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.542338  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.542344  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.542347  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.542351  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.542356  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542367  722351 system_pods.go:89] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542371  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.542375  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.542377  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.542380  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.542384  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.542393  722351 system_pods.go:126] duration metric: took 3.988364ms to wait for k8s-apps to be running ...
	I0916 23:57:56.542403  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:56.542447  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:56.554466  722351 system_svc.go:56] duration metric: took 12.054188ms WaitForService to wait for kubelet
	I0916 23:57:56.554496  722351 kubeadm.go:578] duration metric: took 2.171026353s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:56.554519  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:56.557501  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557532  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557552  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557557  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557561  722351 node_conditions.go:105] duration metric: took 3.037317ms to run NodePressure ...
	I0916 23:57:56.557575  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:56.557610  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:56.559549  722351 out.go:203] 
	I0916 23:57:56.561097  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:56.561232  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.562855  722351 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0916 23:57:56.563951  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:56.565051  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:56.566271  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:56.566290  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:56.566373  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:56.566383  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:56.566485  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:56.566581  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.586635  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:56.586656  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:56.586673  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:56.586704  722351 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:56.586811  722351 start.go:364] duration metric: took 87.391µs to acquireMachinesLock for "ha-198834-m03"
	I0916 23:57:56.586843  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:56.587003  722351 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:56.589063  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:56.589158  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:56.589187  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:56.589263  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:56.589299  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589313  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589365  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:56.589385  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589398  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589634  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:56.607248  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc001595440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:56.607297  722351 kic.go:121] calculated static IP "192.168.49.4" for the "ha-198834-m03" container
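The kic driver derives the new node's address from the existing cluster network (192.168.49.0/24 here) rather than letting Docker pick one. Below is a minimal Go sketch of that derivation, assuming a /24 subnet and a simple per-node offset; it is illustrative only, and real code must also skip the gateway and addresses already taken by other containers.

// staticip_sketch.go - hypothetical helper, not minikube's implementation.
package main

import (
	"fmt"
	"net"
)

// nthIP returns the host at offset n inside subnet, e.g. n=4 in
// 192.168.49.0/24 -> 192.168.49.4. Only valid while the offset fits in the
// last octet of an IPv4 subnet.
func nthIP(subnet string, n int) (net.IP, error) {
	_, ipNet, err := net.ParseCIDR(subnet)
	if err != nil {
		return nil, err
	}
	ip := ipNet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("not an IPv4 subnet: %s", subnet)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(n)
	if !ipNet.Contains(out) {
		return nil, fmt.Errorf("offset %d escapes %s", n, subnet)
	}
	return out, nil
}

func main() {
	// m03 is the third machine; .1 is the gateway, so its offset is 4.
	ip, err := nthIP("192.168.49.0/24", 4)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.49.4
}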
	I0916 23:57:56.607371  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:56.624198  722351 cli_runner.go:164] Run: docker volume create ha-198834-m03 --label name.minikube.sigs.k8s.io=ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:56.642183  722351 oci.go:103] Successfully created a docker volume ha-198834-m03
	I0916 23:57:56.642258  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --entrypoint /usr/bin/test -v ha-198834-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:57.021785  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m03
	I0916 23:57:57.021834  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:57.021864  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:57.021952  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:59.672995  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.650992477s)
	I0916 23:57:59.673039  722351 kic.go:203] duration metric: took 2.651177157s to extract preloaded images to volume ...
	W0916 23:57:59.673144  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:59.673190  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:59.673255  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:59.730169  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m03 --name ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m03 --network ha-198834 --ip 192.168.49.4 --volume ha-198834-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:58:00.013728  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Running}}
	I0916 23:58:00.034076  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.054832  722351 cli_runner.go:164] Run: docker exec ha-198834-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:58:00.109517  722351 oci.go:144] the created container "ha-198834-m03" has a running status.
	I0916 23:58:00.109546  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa...
	I0916 23:58:00.621029  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:58:00.621097  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:58:00.651614  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.673435  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:58:00.673460  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:58:00.730412  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.749865  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:58:00.750006  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.771445  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.771738  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.771754  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:58:00.920523  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:00.920553  722351 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0916 23:58:00.920616  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.940561  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.940837  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.940853  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0916 23:58:01.103101  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:01.103204  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:01.125182  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:01.125511  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:01.125543  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:58:01.275155  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
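Every provisioning step from here on runs over SSH to the container's forwarded port (127.0.0.1:32793 in this log) with the machine's generated id_rsa key. A self-contained Go sketch of that native SSH path using golang.org/x/crypto/ssh; user, port and key path are taken from the log, while the helper itself is illustrative rather than minikube's sshutil code.

// ssh_hostname_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runHostname dials addr, authenticates with the private key at keyPath and
// returns the output of `hostname`.
func runHostname(addr, keyPath string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	return string(out), err
}

func main() {
	out, err := runHostname("127.0.0.1:32793",
		"/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // expected: ha-198834-m03
}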
	I0916 23:58:01.275201  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:58:01.275231  722351 ubuntu.go:190] setting up certificates
	I0916 23:58:01.275246  722351 provision.go:84] configureAuth start
	I0916 23:58:01.275318  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:01.296305  722351 provision.go:143] copyHostCerts
	I0916 23:58:01.296378  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296426  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:58:01.296439  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296527  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:58:01.296632  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296656  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:58:01.296682  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296726  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:58:01.296788  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296825  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:58:01.296835  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296924  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:58:01.297040  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
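The "generating server cert" step amounts to signing a server certificate with the cluster CA and the node's SANs. A hedged sketch with crypto/x509 using the SANs from the log line above; the throwaway CA, key size and validity period are stand-ins, not the values minikube actually uses.

// servercert_sketch.go - illustrative only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA so the sketch is self-contained; the real flow loads the
	// existing ca.pem / ca-key.pem instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate carrying the SANs listed in the log line.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834-m03"}},
		DNSNames:     []string{"ha-198834-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)

	// Write server.pem the way the provisioner would before copying it to /etc/docker.
	must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644))
}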
	I0916 23:58:02.100987  722351 provision.go:177] copyRemoteCerts
	I0916 23:58:02.101048  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:58:02.101084  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.119475  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:02.218802  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:58:02.218870  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:58:02.251628  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:58:02.251700  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:58:02.279052  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:58:02.279124  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:58:02.305168  722351 provision.go:87] duration metric: took 1.029902032s to configureAuth
	I0916 23:58:02.305208  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:58:02.305440  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:02.305491  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.322139  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.322413  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.322428  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:58:02.459594  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:58:02.459629  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:58:02.459746  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:58:02.459804  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.476657  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.476985  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.477099  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:58:02.633394  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:58:02.633489  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.651145  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.651390  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.651410  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:58:03.800032  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:58:02.631485455 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:58:03.800077  722351 machine.go:96] duration metric: took 3.050188223s to provisionDockerMachine
	I0916 23:58:03.800094  722351 client.go:171] duration metric: took 7.210891992s to LocalClient.Create
	I0916 23:58:03.800121  722351 start.go:167] duration metric: took 7.210962522s to libmachine.API.Create "ha-198834"
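The `diff -u ... || { mv; daemon-reload; restart; }` step above only swapped in docker.service.new and restarted Docker because the unit actually changed. The same replace-only-if-changed idiom, sketched as a small Go helper; paths and systemctl calls mirror the log, the helper name is illustrative.

// replace_if_changed_sketch.go - illustrative only.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged moves newPath over path and restarts service only when the
// two files differ; it reports whether a restart happened.
func replaceIfChanged(path, newPath, service string) (bool, error) {
	oldBytes, _ := os.ReadFile(path) // a missing old file counts as "different"
	newBytes, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	if bytes.Equal(oldBytes, newBytes) {
		return false, os.Remove(newPath)
	}
	if err := os.Rename(newPath, path); err != nil {
		return false, err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return false, fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return true, nil
}

func main() {
	restarted, err := replaceIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	if err != nil {
		panic(err)
	}
	fmt.Println("restarted:", restarted)
}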
	I0916 23:58:03.800131  722351 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0916 23:58:03.800155  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:58:03.800229  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:58:03.800295  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.817949  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:03.918038  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:58:03.922382  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:58:03.922420  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:58:03.922430  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:58:03.922438  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:58:03.922452  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:58:03.922512  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:58:03.922607  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:58:03.922620  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:58:03.922727  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:58:03.932298  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:03.961387  722351 start.go:296] duration metric: took 161.230642ms for postStartSetup
	I0916 23:58:03.961811  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:03.979123  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:58:03.979395  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:58:03.979437  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.997520  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.091253  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:58:04.096537  722351 start.go:128] duration metric: took 7.509514126s to createHost
	I0916 23:58:04.096585  722351 start.go:83] releasing machines lock for "ha-198834-m03", held for 7.509743952s
	I0916 23:58:04.096660  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:04.115702  722351 out.go:179] * Found network options:
	I0916 23:58:04.117029  722351 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:58:04.118232  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118256  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118281  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118299  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:58:04.118395  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:58:04.118441  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.118449  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:58:04.118515  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.136875  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.137594  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.231418  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:58:04.311016  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:58:04.311108  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:58:04.340810  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:58:04.340841  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.340871  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.340997  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.359059  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:58:04.371794  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:58:04.383345  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:58:04.383421  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:58:04.394513  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.405081  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:58:04.415653  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.426510  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:58:04.436405  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:58:04.447135  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:58:04.457926  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:58:04.469563  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:58:04.478599  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:58:04.488307  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:04.557785  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:58:04.636805  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.636855  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.636899  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:58:04.649865  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.662323  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:58:04.680711  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.693319  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:58:04.705665  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.723842  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:58:04.727547  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:58:04.738845  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:58:04.758974  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:58:04.830471  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:58:04.900429  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:58:04.900482  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
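The 129-byte /etc/docker/daemon.json pushed here is not shown in the log; the standard way to switch Docker to the systemd cgroup driver is the "exec-opts" key, as in this sketch. The exact file minikube writes may contain additional keys.

// daemonjson_sketch.go - illustrates the mechanism, not minikube's exact file.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=systemd"},
		// Other keys (log driver, storage driver, ...) could live here too.
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// Write this to /etc/docker/daemon.json, then daemon-reload and restart docker.
	fmt.Println(string(b))
}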
	I0916 23:58:04.920093  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:58:04.931599  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:05.002855  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:58:05.807532  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:58:05.819728  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:58:05.832303  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:05.844347  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:58:05.916277  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:58:05.988520  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.055206  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:58:06.080490  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:58:06.092817  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.162707  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:58:06.248276  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:06.261931  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:58:06.262000  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:58:06.265868  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:58:06.265941  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:58:06.269385  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:58:06.305058  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:58:06.305139  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.331725  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.358446  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:58:06.359714  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:58:06.360964  722351 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:58:06.362187  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:58:06.379025  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:58:06.383173  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
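The bash one-liner above rewrites /etc/hosts through a temp file: drop any stale host.minikube.internal entry, append the gateway mapping, copy the result back. The same logic in Go, with illustrative helper names.

// hosts_update_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line for host and appends "ip\thost",
// writing through a temp file as the shell version does.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(out), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}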
	I0916 23:58:06.394963  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:58:06.395208  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:06.395415  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:58:06.412700  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:06.412979  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0916 23:58:06.412992  722351 certs.go:194] generating shared ca certs ...
	I0916 23:58:06.413008  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:06.413150  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:58:06.413202  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:58:06.413213  722351 certs.go:256] generating profile certs ...
	I0916 23:58:06.413290  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:58:06.413316  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0916 23:58:06.413331  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:58:07.059616  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 ...
	I0916 23:58:07.059648  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783: {Name:mka6f3e20ae0db98330bce12c7c53c8ceb029f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.059850  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 ...
	I0916 23:58:07.059873  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783: {Name:mk88fba5116449476945068bb066a5fae095ca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.060019  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:58:07.060173  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:58:07.060303  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:58:07.060320  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:58:07.060332  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:58:07.060346  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:58:07.060359  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:58:07.060371  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:58:07.060383  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:58:07.060395  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:58:07.060407  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:58:07.060462  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:58:07.060492  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:58:07.060502  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:58:07.060525  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:58:07.060546  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:58:07.060571  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:58:07.060609  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:07.060634  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.060648  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.060666  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.060725  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:07.077675  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:07.167227  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:58:07.171339  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:58:07.184631  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:58:07.188345  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:58:07.201195  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:58:07.204727  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:58:07.217344  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:58:07.220977  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:58:07.233804  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:58:07.237296  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:58:07.250936  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:58:07.254504  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:58:07.267513  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:58:07.293250  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:58:07.319357  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:58:07.345045  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:58:07.370793  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:58:07.397411  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:58:07.422329  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:58:07.447186  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:58:07.472564  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:58:07.500373  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:58:07.526598  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:58:07.552426  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:58:07.570062  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:58:07.589628  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:58:07.609486  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:58:07.630629  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:58:07.650280  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:58:07.669308  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:58:07.687700  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:58:07.694681  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:58:07.705784  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709662  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709739  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.716649  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:58:07.726290  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:58:07.736118  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740041  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740101  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.747081  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:58:07.757480  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:58:07.767310  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771054  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771114  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.778013  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
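Each CA dropped into /usr/share/ca-certificates is also linked as /etc/ssl/certs/<subject-hash>.0 so OpenSSL's hashed lookup finds it; that is what the `openssl x509 -hash` plus `ln -fs` pair above does. A Go sketch of the same step, with paths from the log and an illustrative helper name.

// certhash_symlink_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and links
// <hash>.0 in certsDir to the certificate, replacing any existing link.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	_ = os.Remove(link) // mirror `ln -fs`: force-replace an existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		panic(err)
	}
	fmt.Println("created", link)
}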
	I0916 23:58:07.788245  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:58:07.792058  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:58:07.792123  722351 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0916 23:58:07.792232  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:58:07.792263  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:58:07.792307  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:58:07.805180  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:58:07.805247  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:58:07.805296  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:58:07.814610  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:58:07.814678  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:58:07.825352  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:58:07.844047  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:58:07.862757  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:58:07.883848  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:58:07.887562  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:07.899646  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:07.974384  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:08.004718  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:08.005001  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:08.005124  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:58:08.005169  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:08.024622  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:08.169785  722351 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:08.169853  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:58:25.708852  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (17.538975369s)
	I0916 23:58:25.708884  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:58:25.930343  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m03 minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:58:26.006016  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:58:26.089408  722351 start.go:319] duration metric: took 18.084403561s to joinCluster
	I0916 23:58:26.089494  722351 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:26.089805  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:26.091004  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:58:26.092246  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:26.200675  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:26.214424  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:58:26.214506  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:58:26.214713  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	W0916 23:58:28.218137  722351 node_ready.go:57] node "ha-198834-m03" has "Ready":"False" status (will retry)
	I0916 23:58:29.718579  722351 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0916 23:58:29.718621  722351 node_ready.go:38] duration metric: took 3.503891029s for node "ha-198834-m03" to be "Ready" ...
	I0916 23:58:29.718640  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:58:29.718688  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:58:29.730821  722351 api_server.go:72] duration metric: took 3.641289304s to wait for apiserver process to appear ...
	I0916 23:58:29.730847  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:58:29.730870  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:58:29.736447  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:58:29.737363  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:58:29.737382  722351 api_server.go:131] duration metric: took 6.528439ms to wait for apiserver health ...
	I0916 23:58:29.737390  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:58:29.743125  722351 system_pods.go:59] 27 kube-system pods found
	I0916 23:58:29.743154  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.743159  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.743162  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.743166  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.743169  722351 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.743172  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.743179  722351 system_pods.go:61] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743182  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.743189  722351 system_pods.go:61] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743193  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.743198  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.743202  722351 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.743206  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.743209  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.743212  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.743216  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.743220  722351 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743227  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.743231  722351 system_pods.go:61] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743236  722351 system_pods.go:61] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743241  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.743245  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.743248  722351 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.743251  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.743254  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.743257  722351 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.743260  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.743267  722351 system_pods.go:74] duration metric: took 5.871633ms to wait for pod list to return data ...
	I0916 23:58:29.743275  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:58:29.746038  722351 default_sa.go:45] found service account: "default"
	I0916 23:58:29.746059  722351 default_sa.go:55] duration metric: took 2.77496ms for default service account to be created ...
	I0916 23:58:29.746067  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:58:29.751428  722351 system_pods.go:86] 27 kube-system pods found
	I0916 23:58:29.751454  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.751459  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.751463  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.751466  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.751469  722351 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.751472  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.751478  722351 system_pods.go:89] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751482  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.751490  722351 system_pods.go:89] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751494  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.751498  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.751501  722351 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.751504  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.751508  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.751512  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.751515  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.751520  722351 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751526  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.751530  722351 system_pods.go:89] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751535  722351 system_pods.go:89] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751540  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.751545  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.751550  722351 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.751554  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.751558  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.751563  722351 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.751569  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.751577  722351 system_pods.go:126] duration metric: took 5.505301ms to wait for k8s-apps to be running ...
	I0916 23:58:29.751587  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:58:29.751637  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:58:29.764067  722351 system_svc.go:56] duration metric: took 12.467532ms WaitForService to wait for kubelet
	I0916 23:58:29.764102  722351 kubeadm.go:578] duration metric: took 3.674577242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:58:29.764127  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:58:29.767676  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767699  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767712  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767717  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767721  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767724  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767728  722351 node_conditions.go:105] duration metric: took 3.595861ms to run NodePressure ...
	I0916 23:58:29.767739  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:58:29.767761  722351 start.go:255] writing updated cluster config ...
	I0916 23:58:29.768076  722351 ssh_runner.go:195] Run: rm -f paused
	I0916 23:58:29.772054  722351 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:29.772528  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:58:29.776391  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781517  722351 pod_ready.go:94] pod "coredns-66bc5c9577-5wx4k" is "Ready"
	I0916 23:58:29.781544  722351 pod_ready.go:86] duration metric: took 5.128752ms for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781552  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.786524  722351 pod_ready.go:94] pod "coredns-66bc5c9577-mjbz6" is "Ready"
	I0916 23:58:29.786549  722351 pod_ready.go:86] duration metric: took 4.991527ms for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.789148  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793593  722351 pod_ready.go:94] pod "etcd-ha-198834" is "Ready"
	I0916 23:58:29.793614  722351 pod_ready.go:86] duration metric: took 4.43654ms for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793622  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797833  722351 pod_ready.go:94] pod "etcd-ha-198834-m02" is "Ready"
	I0916 23:58:29.797856  722351 pod_ready.go:86] duration metric: took 4.228462ms for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797864  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.974055  722351 request.go:683] "Waited before sending request" delay="176.0853ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.173047  722351 request.go:683] "Waited before sending request" delay="193.205885ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.373324  722351 request.go:683] "Waited before sending request" delay="74.260595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.573189  722351 request.go:683] "Waited before sending request" delay="196.187075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.973960  722351 request.go:683] "Waited before sending request" delay="171.749825ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.977519  722351 pod_ready.go:94] pod "etcd-ha-198834-m03" is "Ready"
	I0916 23:58:30.977548  722351 pod_ready.go:86] duration metric: took 1.179678858s for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.172996  722351 request.go:683] "Waited before sending request" delay="195.270589ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:58:31.176896  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.373184  722351 request.go:683] "Waited before sending request" delay="196.155083ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834"
	I0916 23:58:31.573091  722351 request.go:683] "Waited before sending request" delay="196.292532ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:31.576254  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834" is "Ready"
	I0916 23:58:31.576280  722351 pod_ready.go:86] duration metric: took 399.33205ms for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.576288  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.773718  722351 request.go:683] "Waited before sending request" delay="197.34633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m02"
	I0916 23:58:31.973716  722351 request.go:683] "Waited before sending request" delay="196.477986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:31.978504  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m02" is "Ready"
	I0916 23:58:31.978555  722351 pod_ready.go:86] duration metric: took 402.258846ms for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.978567  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.172964  722351 request.go:683] "Waited before sending request" delay="194.26238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m03"
	I0916 23:58:32.373491  722351 request.go:683] "Waited before sending request" delay="197.345263ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:32.376525  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m03" is "Ready"
	I0916 23:58:32.376552  722351 pod_ready.go:86] duration metric: took 397.9768ms for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.573017  722351 request.go:683] "Waited before sending request" delay="196.299414ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:58:32.577487  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.773969  722351 request.go:683] "Waited before sending request" delay="196.341624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834"
	I0916 23:58:32.973585  722351 request.go:683] "Waited before sending request" delay="196.346276ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:32.977689  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834" is "Ready"
	I0916 23:58:32.977721  722351 pod_ready.go:86] duration metric: took 400.206125ms for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.977735  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.173032  722351 request.go:683] "Waited before sending request" delay="195.180271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m02"
	I0916 23:58:33.373811  722351 request.go:683] "Waited before sending request" delay="197.350717ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:33.376722  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m02" is "Ready"
	I0916 23:58:33.376747  722351 pod_ready.go:86] duration metric: took 399.004052ms for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.376756  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.573048  722351 request.go:683] "Waited before sending request" delay="196.186349ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m03"
	I0916 23:58:33.773733  722351 request.go:683] "Waited before sending request" delay="197.347012ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:33.776944  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m03" is "Ready"
	I0916 23:58:33.776972  722351 pod_ready.go:86] duration metric: took 400.209131ms for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.973425  722351 request.go:683] "Waited before sending request" delay="196.344301ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:58:33.977203  722351 pod_ready.go:83] waiting for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.173688  722351 request.go:683] "Waited before sending request" delay="196.345801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tkhn"
	I0916 23:58:34.373026  722351 request.go:683] "Waited before sending request" delay="196.256084ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:34.376079  722351 pod_ready.go:94] pod "kube-proxy-5tkhn" is "Ready"
	I0916 23:58:34.376106  722351 pod_ready.go:86] duration metric: took 398.875647ms for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.376114  722351 pod_ready.go:83] waiting for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.573402  722351 request.go:683] "Waited before sending request" delay="197.174223ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:34.773022  722351 request.go:683] "Waited before sending request" delay="196.289258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:34.973958  722351 request.go:683] "Waited before sending request" delay="97.260541ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:35.173637  722351 request.go:683] "Waited before sending request" delay="196.407064ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.573487  722351 request.go:683] "Waited before sending request" delay="193.254271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.973307  722351 request.go:683] "Waited before sending request" delay="93.259111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	W0916 23:58:36.383328  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:38.882062  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:40.882520  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:42.883194  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:45.382843  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:47.882744  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:49.882993  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:51.883265  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:54.383005  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:56.882555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:59.382463  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:01.382897  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:03.883583  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:06.382581  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:08.882275  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:11.382224  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:13.382333  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:15.882727  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:18.383800  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:20.882547  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:22.883081  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:25.383627  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:27.882377  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:29.882787  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:31.884042  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:34.382932  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:36.882730  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:38.882959  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:40.883411  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:43.382771  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:45.882938  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:48.381607  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:50.382229  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:52.382889  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:54.882546  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:56.882802  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:58.882939  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:00.883550  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:03.382872  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:05.383021  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:07.384166  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:09.883064  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:11.884141  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:14.383248  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:16.883441  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:18.884438  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:21.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:23.883713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:26.383093  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:28.883552  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:31.383392  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:33.883626  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:35.883823  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:38.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:40.883430  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:43.383026  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:45.883091  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:48.382865  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:50.882713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:52.882989  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:55.383076  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:57.383555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:59.882704  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:01.883495  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:04.382406  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:06.383424  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:08.883456  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:11.382988  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:13.882379  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:15.883651  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:18.382551  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:20.382997  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:22.882943  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:24.883256  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:27.383660  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:29.882955  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:32.383364  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	I0917 00:01:34.382530  722351 pod_ready.go:94] pod "kube-proxy-d8brp" is "Ready"
	I0917 00:01:34.382562  722351 pod_ready.go:86] duration metric: took 3m0.006439942s for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.382572  722351 pod_ready.go:83] waiting for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.387645  722351 pod_ready.go:94] pod "kube-proxy-h2fxd" is "Ready"
	I0917 00:01:34.387677  722351 pod_ready.go:86] duration metric: took 5.098826ms for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.390707  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396086  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834" is "Ready"
	I0917 00:01:34.396115  722351 pod_ready.go:86] duration metric: took 5.379692ms for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396126  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400646  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m02" is "Ready"
	I0917 00:01:34.400670  722351 pod_ready.go:86] duration metric: took 4.536355ms for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400680  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.577209  722351 request.go:683] "Waited before sending request" delay="174.117357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0917 00:01:34.580767  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m03" is "Ready"
	I0917 00:01:34.580796  722351 pod_ready.go:86] duration metric: took 180.109317ms for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.580808  722351 pod_ready.go:40] duration metric: took 3m4.808720134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:01:34.629691  722351 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:01:34.631405  722351 out.go:179] * Done! kubectl is now configured to use "ha-198834" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50aecbe9f874a63c5159d55af06211bca7903e623f01f1e603f267caaf6da9a7/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.259744438Z" level=info msg="ignoring event" container=fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.275867775Z" level=info msg="ignoring event" container=64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.320870537Z" level=info msg="ignoring event" container=310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.336829292Z" level=info msg="ignoring event" container=a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687384709Z" level=info msg="ignoring event" container=11889e34950f849cf7805c6d56f1957ad9d5af727f4810f2da728671398b9f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687719889Z" level=info msg="ignoring event" container=1ccdf9f33d5601763297f230a2f6e51620db2ed183e9f4b9179f4ccef579dfac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756623723Z" level=info msg="ignoring event" container=bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756673284Z" level=info msg="ignoring event" container=870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:01:36 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:01:37 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:37Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	43ce744921507       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   2 minutes ago       Running             busybox                   0                   2ab50e090d466       busybox-7b57f96db7-pstjp
	f4f7ea59034e3       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   7b09bb6b1db0b       coredns-66bc5c9577-mjbz6
	9a9eb43950f05       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   bfd4ac8a61c79       coredns-66bc5c9577-5wx4k
	1ccdf9f33d560       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   bf6d6b59f2413       coredns-66bc5c9577-mjbz6
	11889e34950f8       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   870758f308362       coredns-66bc5c9577-5wx4k
	470c5aeb0143c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              6 minutes ago       Running             kindnet-cni               0                   f541f878be896       kindnet-h28vp
	b16ddbbc469c5       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       0                   50aecbe9f874a       storage-provisioner
	2da683f529549       df0860106674d                                                                                         6 minutes ago       Running             kube-proxy                0                   b04f554fbbf03       kube-proxy-5tkhn
	8a32665f7e3e4       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     6 minutes ago       Running             kube-vip                  0                   5e4aed7a38e18       kube-vip-ha-198834
	4f536df8f44eb       a0af72f2ec6d6                                                                                         6 minutes ago       Running             kube-controller-manager   0                   3f97e150fa11b       kube-controller-manager-ha-198834
	ea129c2b5408a       90550c43ad2bc                                                                                         6 minutes ago       Running             kube-apiserver            0                   364803df34eb0       kube-apiserver-ha-198834
	69601afa8d5b0       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      0                   d6bbb58cc14ca       etcd-ha-198834
	82a99d0c7744a       46169d968e920                                                                                         6 minutes ago       Running             kube-scheduler            0                   7ffde546949d7       kube-scheduler-ha-198834
	
	
	==> coredns [11889e34950f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50107 - 45856 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000165011s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:50484 - 7509 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000096464s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [1ccdf9f33d56] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49262 - 38359 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000112146s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:51442 - 41164 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000125545s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [9a9eb43950f0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48462 - 46874 "HINFO IN 5273252588524494281.7165436024008789767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039842483s
	[INFO] 10.244.1.2:57104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003506199s
	[INFO] 10.244.1.2:35085 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.022178595s
	[INFO] 10.244.0.4:51301 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001619426s
	[INFO] 10.244.1.2:53849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259853s
	[INFO] 10.244.1.2:45188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162256s
	[INFO] 10.244.1.2:47534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012152721s
	[INFO] 10.244.1.2:52406 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144842s
	[INFO] 10.244.0.4:34463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015354953s
	[INFO] 10.244.0.4:44729 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186473s
	[INFO] 10.244.0.4:49846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119774s
	[INFO] 10.244.0.4:48015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170848s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022712s
	[INFO] 10.244.1.2:41177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176943s
	[INFO] 10.244.1.2:35431 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141918s
	[INFO] 10.244.0.4:42357 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222475s
	[INFO] 10.244.0.4:38639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076388s
	[INFO] 10.244.1.2:48245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137114s
	
	
	==> coredns [f4f7ea59034e] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55965 - 29875 "HINFO IN 6625775143588404920.46653605595863863. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.021009667s
	[INFO] 10.244.1.2:48391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347519s
	[INFO] 10.244.1.2:52968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.006268731s
	[INFO] 10.244.1.2:37064 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.01625905s
	[INFO] 10.244.0.4:38724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015051s
	[INFO] 10.244.0.4:54867 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000812822s
	[INFO] 10.244.0.4:36556 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000823234s
	[INFO] 10.244.0.4:49673 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000092222s
	[INFO] 10.244.1.2:54588 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010881123s
	[INFO] 10.244.1.2:37311 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184317s
	[INFO] 10.244.1.2:34776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214951s
	[INFO] 10.244.1.2:60592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142928s
	[INFO] 10.244.0.4:49014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014497s
	[INFO] 10.244.0.4:49266 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004264282s
	[INFO] 10.244.0.4:42048 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128777s
	[INFO] 10.244.0.4:37542 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071541s
	[INFO] 10.244.1.2:43417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165726s
	[INFO] 10.244.0.4:54211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155228s
	[INFO] 10.244.0.4:54131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161968s
	[INFO] 10.244.1.2:34766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190106s
	[INFO] 10.244.1.2:35363 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179724s
	[INFO] 10.244.1.2:39508 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110509s
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 3525bf030f0d49c1ab057441433c477c
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m29s
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m29s
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m35s
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m29s
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m27s  kube-proxy       
	  Normal  Starting                 6m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m35s  kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s  kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s  kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           6m1s   node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           5m30s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 35caf7934a824e33949ce426f7316bfd
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m57s
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m53s  kube-proxy       
	  Normal  RegisteredNode  5m56s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  5m55s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  5m30s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	
	
	Name:               ha-198834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-198834-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 32c4e7dc065e4fa49595825994457b8e
	  System UUID:                6f810798-3461-44d1-91c3-d55b483ec842
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l2jn5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 etcd-ha-198834-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m24s
	  kube-system                 kindnet-67fn9                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m19s
	  kube-system                 kube-apiserver-ha-198834-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-ha-198834-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-d8brp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-198834-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-vip-ha-198834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  5m26s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  5m25s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  5m25s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
	
	
	==> etcd [69601afa8d5b] <==
	{"level":"info","ts":"2025-09-16T23:58:12.699050Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-16T23:58:12.699094Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.699108Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.702028Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-16T23:58:12.702080Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.702094Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.733438Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.736369Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-16T23:58:12.759123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:34222","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:58:12.760774Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(5981864578030751937 12593026477526642892 12956928539845794953)"}
	{"level":"info","ts":"2025-09-16T23:58:12.760967Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.761007Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:19.991223Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:25.496900Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:30.072550Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:32.068856Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:40.123997Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:42.678047Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","bytes":1393601,"size":"1.4 MB","took":"30.013494343s"}
	{"level":"info","ts":"2025-09-17T00:03:27.515545Z","caller":"traceutil/trace.go:172","msg":"trace[429348455] transaction","detail":"{read_only:false; response_revision:1816; number_of_response:1; }","duration":"111.335739ms","start":"2025-09-17T00:03:27.404190Z","end":"2025-09-17T00:03:27.515525Z","steps":["trace[429348455] 'process raft request'  (duration: 111.14691ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:03:45.321237Z","caller":"traceutil/trace.go:172","msg":"trace[1168397664] transaction","detail":"{read_only:false; response_revision:1860; number_of_response:1; }","duration":"125.134331ms","start":"2025-09-17T00:03:45.196084Z","end":"2025-09-17T00:03:45.321218Z","steps":["trace[1168397664] 'process raft request'  (duration: 124.989711ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:03:45.959335Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.771431ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040017681051689 > lease_revoke:<id:50c19954f670abb9>","response":"size:29"}
	{"level":"info","ts":"2025-09-17T00:03:45.960220Z","caller":"traceutil/trace.go:172","msg":"trace[1051336348] linearizableReadLoop","detail":"{readStateIndex:2294; appliedIndex:2293; }","duration":"253.51671ms","start":"2025-09-17T00:03:45.706683Z","end":"2025-09-17T00:03:45.960199Z","steps":["trace[1051336348] 'read index received'  (duration: 352.53µs)","trace[1051336348] 'applied index is now lower than readState.Index'  (duration: 253.162091ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:03:45.960342Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"293.914233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:03:45.960374Z","caller":"traceutil/trace.go:172","msg":"trace[305973442] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:1862; }","duration":"293.967568ms","start":"2025-09-17T00:03:45.666397Z","end":"2025-09-17T00:03:45.960365Z","steps":["trace[305973442] 'agreement among raft nodes before linearized reading'  (duration: 293.876046ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:03:45.960547Z","caller":"traceutil/trace.go:172","msg":"trace[2000303218] transaction","detail":"{read_only:false; response_revision:1863; number_of_response:1; }","duration":"248.094618ms","start":"2025-09-17T00:03:45.712439Z","end":"2025-09-17T00:03:45.960534Z","steps":["trace[2000303218] 'process raft request'  (duration: 247.028417ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:03:53 up  2:46,  0 users,  load average: 2.03, 1.40, 1.13
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [470c5aeb0143] <==
	I0917 00:03:10.423559       1 main.go:301] handling current node
	I0917 00:03:20.423023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:20.423063       1 main.go:301] handling current node
	I0917 00:03:20.423080       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:20.423085       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:20.423378       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:20.423393       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:30.423984       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:30.424027       1 main.go:301] handling current node
	I0917 00:03:30.424048       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:30.424055       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:30.424343       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:30.424355       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:40.423382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:40.423419       1 main.go:301] handling current node
	I0917 00:03:40.423434       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:40.423439       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:40.423677       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:40.423692       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:50.420798       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:50.420829       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:50.421086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:50.421118       1 main.go:301] handling current node
	I0917 00:03:50.421132       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:50.421136       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ea129c2b5408] <==
	I0916 23:57:24.194840       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:57:24.200277       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:57:24.242655       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0916 23:58:29.048843       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:34.361323       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:36.632983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:02.667929       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:58.976838       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:19.218755       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:15.644338       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:43.338268       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:03:18.851078       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58262: use of closed network connection
	E0917 00:03:19.024113       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58282: use of closed network connection
	E0917 00:03:19.194951       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58306: use of closed network connection
	E0917 00:03:19.388722       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58332: use of closed network connection
	E0917 00:03:19.557698       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58342: use of closed network connection
	E0917 00:03:19.744687       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58348: use of closed network connection
	E0917 00:03:19.919836       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58362: use of closed network connection
	E0917 00:03:20.087518       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58376: use of closed network connection
	E0917 00:03:20.254024       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58398: use of closed network connection
	E0917 00:03:22.459781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48968: use of closed network connection
	E0917 00:03:22.632160       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48992: use of closed network connection
	E0917 00:03:22.799975       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:49024: use of closed network connection
	I0917 00:03:39.352525       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:47.239226       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4f536df8f44e] <==
	I0916 23:57:23.340737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0916 23:57:23.340876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:57:23.341125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:57:23.341625       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0916 23:57:23.341694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0916 23:57:23.342559       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0916 23:57:23.344828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:57:23.344975       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:57:23.345054       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:57:23.345095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:57:23.345107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:57:23.345114       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:57:23.346125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:23.351186       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834" podCIDRs=["10.244.0.0/24"]
	I0916 23:57:23.356557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:23.360087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:53.917484       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m02\" does not exist"
	I0916 23:57:53.927329       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:58.295579       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	E0916 23:58:24.690329       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 23:58:24.703047       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:58:25.387067       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m03\" does not exist"
	I0916 23:58:25.397154       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m03" podCIDRs=["10.244.2.0/24"]
	I0916 23:58:28.308323       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	E0917 00:01:35.727697       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [2da683f52954] <==
	I0916 23:57:24.932824       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:57:25.001436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:57:25.102414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:57:25.102449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:57:25.102563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:57:25.131540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:57:25.131604       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:57:25.138482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:57:25.139006       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:57:25.139079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:57:25.143232       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:57:25.143254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:57:25.143282       1 config.go:200] "Starting service config controller"
	I0916 23:57:25.143288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:57:25.143298       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:57:25.143304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:57:25.144514       1 config.go:309] "Starting node config controller"
	I0916 23:57:25.144540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:57:25.144548       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:57:25.243772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [82a99d0c7744] <==
	E0916 23:58:30.036759       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.036813       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5897933c-61bc-4eef-8922-66c37ba68c57(kube-system/kindnet-rwc59) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	E0916 23:58:30.036834       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	I0916 23:58:30.038109       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.048424       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:30.048665       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4edbf3a1-360c-4f5c-81a3-aa63deb9a159(kube-system/kindnet-lpn5v) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	E0916 23:58:30.048751       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	I0916 23:58:30.051563       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:32.033373       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:32.033442       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b547a423-84fb-45ae-be85-ebd5ae31cede(kube-system/kindnet-wklkh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	E0916 23:58:32.033468       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	I0916 23:58:32.034562       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:34.059741       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	E0916 23:58:34.059840       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ceef8152-3e11-4bf0-99dc-43470c027544(kube-system/kindnet-cdptd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.059869       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.060293       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	E0916 23:58:34.060658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c8813f5f-dfaf-4be1-a0ba-e444bcb2e943(kube-system/kindnet-8t8pb) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	E0916 23:58:34.061375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	I0916 23:58:34.061557       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	I0916 23:58:34.062640       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	I0917 00:01:35.538693       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="ecad6988-1efb-4cc6-8920-902b41d3f3ed" pod="default/busybox-7b57f96db7-kg4q6" assumedNode="ha-198834-m02" currentNode="ha-198834-m03"
	E0917 00:01:35.544474       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m03"
	E0917 00:01:35.546366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ecad6988-1efb-4cc6-8920-902b41d3f3ed(default/busybox-7b57f96db7-kg4q6) was assumed on ha-198834-m03 but assigned to ha-198834-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	E0917 00:01:35.546583       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	I0917 00:01:35.548055       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m02"
	
	
	==> kubelet <==
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349086    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51d39f-7e43-461b-a021-13ddf0cb9845-lib-modules\") pod \"kindnet-h28vp\" (UID: \"6c51d39f-7e43-461b-a021-13ddf0cb9845\") " pod="kube-system/kindnet-h28vp"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349103    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-xtables-lock\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349123    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84n49\" (UniqueName: \"kubernetes.io/projected/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-kube-api-access-84n49\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650251    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-config-volume\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650425    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5ns\" (UniqueName: \"kubernetes.io/projected/c918625f-be11-44bf-8b82-d4c21b8993d1-kube-api-access-th5ns\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650660    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c918625f-be11-44bf-8b82-d4c21b8993d1-config-volume\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650701    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmb4\" (UniqueName: \"kubernetes.io/projected/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-kube-api-access-xhmb4\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.014693    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5tkhn" podStartSLOduration=1.014665687 podStartE2EDuration="1.014665687s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:24.932304069 +0000 UTC m=+6.176281069" watchObservedRunningTime="2025-09-16 23:57:25.014665687 +0000 UTC m=+6.258642688"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.042478    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.046332    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f541f878be89694936d8219d8e7fc682a8a169d9edf6417f067927aa4748c0ae"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153403    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrvp\" (UniqueName: \"kubernetes.io/projected/6b6f64f3-2647-4e13-be41-47fcc6111f3e-kube-api-access-jqrvp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153458    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b6f64f3-2647-4e13-be41-47fcc6111f3e-tmp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098005    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5wx4k" podStartSLOduration=2.097979793 podStartE2EDuration="2.097979793s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.086842117 +0000 UTC m=+7.330819118" watchObservedRunningTime="2025-09-16 23:57:26.097979793 +0000 UTC m=+7.341956793"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098130    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.098124108 podStartE2EDuration="1.098124108s" podCreationTimestamp="2025-09-16 23:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.097817254 +0000 UTC m=+7.341794256" watchObservedRunningTime="2025-09-16 23:57:26.098124108 +0000 UTC m=+7.342101108"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.159968    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mjbz6" podStartSLOduration=5.159946005 podStartE2EDuration="5.159946005s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.124330373 +0000 UTC m=+7.368307374" watchObservedRunningTime="2025-09-16 23:57:29.159946005 +0000 UTC m=+10.403923006"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.193262    2468 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.194144    2468 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 23:57:30 ha-198834 kubelet[2468]: I0916 23:57:30.158085    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h28vp" podStartSLOduration=1.342825895 podStartE2EDuration="6.158061718s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="2025-09-16 23:57:24.955662014 +0000 UTC m=+6.199639012" lastFinishedPulling="2025-09-16 23:57:29.770897851 +0000 UTC m=+11.014874835" observedRunningTime="2025-09-16 23:57:30.157595407 +0000 UTC m=+11.401572408" watchObservedRunningTime="2025-09-16 23:57:30.158061718 +0000 UTC m=+11.402038720"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.230434    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.258365    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370599    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370662    2468 scope.go:117] "RemoveContainer" containerID="fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.388953    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.389033    2468 scope.go:117] "RemoveContainer" containerID="64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea"
	Sep 17 00:01:35 ha-198834 kubelet[2468]: I0917 00:01:35.703764    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5r6\" (UniqueName: \"kubernetes.io/projected/a7cf1231-2a12-4247-a01a-2c2f02f5f2d8-kube-api-access-vt5r6\") pod \"busybox-7b57f96db7-pstjp\" (UID: \"a7cf1231-2a12-4247-a01a-2c2f02f5f2d8\") " pod="default/busybox-7b57f96db7-pstjp"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (29.15s)
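Note on the scheduler entries in the dump above: the repeated "Operation cannot be fulfilled on pods/binding ... is already assigned to node" errors mean the scheduler tried to bind pods (the kindnet DaemonSet pods and busybox-7b57f96db7-kg4q6) that already had a node set, and busybox-7b57f96db7-kg4q6 was assumed on ha-198834-m03 but ended up bound to ha-198834-m02. In a multi-control-plane cluster only the scheduler instance holding the leader lease should be binding, so one quick check when triaging this is to see which instance currently holds it. A minimal sketch, assuming the default kube-scheduler lease name/namespace and the kubectl context shown above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kube-scheduler's leader election uses a coordination.k8s.io Lease named
	// "kube-scheduler" in kube-system; holderIdentity names the active instance.
	out, err := exec.Command("kubectl", "--context", "ha-198834",
		"-n", "kube-system", "get", "lease", "kube-scheduler",
		"-o", "jsonpath={.spec.holderIdentity}").CombinedOutput()
	if err != nil {
		fmt.Println("lease lookup failed:", err, string(out))
		return
	}
	fmt.Printf("active kube-scheduler: %s\n", out)
}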

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (15.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --output json --alsologtostderr -v 5: exit status 7 (717.196223ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-198834","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-198834-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-198834-m03","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-198834-m04","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:03:54.716625  745024 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:03:54.716780  745024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:03:54.716789  745024 out.go:374] Setting ErrFile to fd 2...
	I0917 00:03:54.716794  745024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:03:54.717053  745024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:03:54.717270  745024 out.go:368] Setting JSON to true
	I0917 00:03:54.717293  745024 mustload.go:65] Loading cluster: ha-198834
	I0917 00:03:54.717433  745024 notify.go:220] Checking for updates...
	I0917 00:03:54.717770  745024 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:03:54.717804  745024 status.go:174] checking status of ha-198834 ...
	I0917 00:03:54.718291  745024 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:03:54.739424  745024 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:03:54.739456  745024 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:03:54.739842  745024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:03:54.758540  745024 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:03:54.758780  745024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:03:54.758817  745024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:03:54.776743  745024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:03:54.874551  745024 ssh_runner.go:195] Run: systemctl --version
	I0917 00:03:54.879186  745024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:03:54.891063  745024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:03:54.945603  745024 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:03:54.93472334 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:03:54.946284  745024 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:03:54.946322  745024 api_server.go:166] Checking apiserver status ...
	I0917 00:03:54.946366  745024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:03:54.959038  745024 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:03:54.969189  745024 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:03:54.969248  745024 ssh_runner.go:195] Run: ls
	I0917 00:03:54.973071  745024 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:03:54.977245  745024 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:03:54.977271  745024 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:03:54.977281  745024 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:03:54.977297  745024 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:03:54.977533  745024 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:03:54.995889  745024 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:03:54.995956  745024 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:03:54.996197  745024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:03:55.013877  745024 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:03:55.014161  745024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:03:55.014215  745024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:03:55.032158  745024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:03:55.126317  745024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:03:55.140657  745024 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:03:55.140695  745024 api_server.go:166] Checking apiserver status ...
	I0917 00:03:55.140741  745024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:03:55.153841  745024 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2177/cgroup
	W0917 00:03:55.164000  745024 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:03:55.164064  745024 ssh_runner.go:195] Run: ls
	I0917 00:03:55.167732  745024 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:03:55.172048  745024 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:03:55.172072  745024 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:03:55.172081  745024 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:03:55.172101  745024 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:03:55.172355  745024 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:03:55.189984  745024 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:03:55.190011  745024 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:03:55.190271  745024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:03:55.207891  745024 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:03:55.208215  745024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:03:55.208260  745024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:03:55.225622  745024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:03:55.318527  745024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:03:55.331986  745024 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:03:55.332017  745024 api_server.go:166] Checking apiserver status ...
	I0917 00:03:55.332062  745024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:03:55.343742  745024 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:03:55.354210  745024 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:03:55.354273  745024 ssh_runner.go:195] Run: ls
	I0917 00:03:55.357874  745024 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:03:55.364085  745024 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:03:55.364117  745024 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:03:55.364130  745024 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:03:55.364152  745024 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:03:55.364399  745024 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:03:55.382757  745024 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:03:55.382784  745024 status.go:384] host is not running, skipping remaining checks
	I0917 00:03:55.382793  745024 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
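The status JSON above already explains the failures that follow: ha-198834-m04 reports Host/Kubelet "Stopped", so every cp or ssh command that targets that node returns "is not running". A minimal sketch of reading that output programmatically, with struct fields mirroring the JSON shown above (the profile name and binary path come from the command under test; everything else is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeStatus mirrors the fields visible in the `minikube status --output json` dump above.
type nodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
	Worker    bool
}

func main() {
	// `minikube status` exits non-zero (status 7 above) when any node is down,
	// so ignore the exit error and work from whatever JSON it printed.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-198834",
		"status", "--output", "json").Output()
	var nodes []nodeStatus
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes {
		if n.Host != "Running" {
			fmt.Printf("%s is %s; cp/ssh against it will fail\n", n.Name, n.Host)
		}
	}
}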
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp testdata/cp-test.txt ha-198834:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834_ha-198834-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 "sudo cat /home/docker/cp-test_ha-198834_ha-198834-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834_ha-198834-m03.txt
E0917 00:03:57.528772  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 "sudo cat /home/docker/cp-test_ha-198834_ha-198834-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834_ha-198834-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 cp ha-198834:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834_ha-198834-m04.txt: exit status 1 (140.733725ms)

                                                
                                                
** stderr ** 
	getting host: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 cp ha-198834:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834_ha-198834-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test_ha-198834_ha-198834-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test_ha-198834_ha-198834-m04.txt": exit status 1 (142.471587ms)

                                                
                                                
** stderr ** 
	ssh: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 \"sudo cat /home/docker/cp-test_ha-198834_ha-198834-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
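The "(-want +got)" lines above are a string diff between the contents of testdata/cp-test.txt and whatever came back from the node; here the got side is empty because the copy to ha-198834-m04 never happened. A minimal sketch of that kind of check, assuming go-cmp for the diff (the harness's exact helper isn't shown here) and the same CLI invocation the test runs:

package main

import (
	"fmt"
	"os"
	"os/exec"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// The reference file the test copies around.
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Read the copy back from one node; ignore the ssh exit error and diff whatever we got.
	got, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-198834",
		"ssh", "-n", "ha-198834-m02", "sudo cat /home/docker/cp-test.txt").Output()
	// cmp.Diff prints nothing on a match and a "-want +got" diff otherwise.
	if diff := cmp.Diff(string(want), string(got)); diff != "" {
		fmt.Printf("cp-test.txt content mismatch (-want +got):\n%s", diff)
	}
}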
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp testdata/cp-test.txt ha-198834-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m02:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m02_ha-198834.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 "sudo cat /home/docker/cp-test_ha-198834-m02_ha-198834.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m02:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m02_ha-198834-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 "sudo cat /home/docker/cp-test_ha-198834-m02_ha-198834-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m02:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m02_ha-198834-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m02:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m02_ha-198834-m04.txt: exit status 1 (144.253266ms)

                                                
                                                
** stderr ** 
	getting host: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m02:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m02_ha-198834-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test_ha-198834-m02_ha-198834-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test_ha-198834-m02_ha-198834-m04.txt": exit status 1 (146.108141ms)

                                                
                                                
** stderr ** 
	ssh: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 \"sudo cat /home/docker/cp-test_ha-198834-m02_ha-198834-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp testdata/cp-test.txt ha-198834-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m03_ha-198834.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 "sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 "sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt: exit status 1 (144.139885ms)

                                                
                                                
** stderr ** 
	getting host: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt": exit status 1 (138.341442ms)

                                                
                                                
** stderr ** 
	ssh: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 \"sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt: exit status 1 (143.461884ms)

                                                
                                                
** stderr ** 
	getting host: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (139.586484ms)

                                                
                                                
** stderr ** 
	ssh: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt: exit status 1 (139.977364ms)

                                                
                                                
** stderr ** 
	getting host: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (140.331966ms)

                                                
                                                
** stderr ** 
	ssh: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:545: failed to read test file 'testdata/cp-test.txt' : open /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt: no such file or directory
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt: exit status 1 (160.461796ms)

                                                
                                                
** stderr ** 
	getting host: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (140.323745ms)

                                                
                                                
** stderr ** 
	ssh: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 "sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 "sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt": exit status 1 (255.054754ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-198834-m04_ha-198834.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834 \"sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-198834-m04_ha-198834.txt: No such file or directory\r\n",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt: exit status 1 (161.176089ms)

                                                
                                                
** stderr ** 
	getting host: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (148.319088ms)

                                                
                                                
** stderr ** 
	ssh: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 "sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 "sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt": exit status 1 (260.387932ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m02 \"sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt: No such file or directory\r\n",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt: exit status 1 (159.174653ms)

                                                
                                                
** stderr ** 
	getting host: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (143.495212ms)

                                                
                                                
** stderr ** 
	ssh: "ha-198834-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 "sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 "sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt": exit status 1 (259.549712ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m03 \"sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt: No such file or directory\r\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:57:02.530585618Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6698b0ad85a9078b37114c4e66646c6dc7a67a706d28557d80b29fea1d15d512",
	            "SandboxKey": "/var/run/docker/netns/6698b0ad85a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:eb:f5:3a:ee:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "669cb4f772890bad35a4ad4cdb1934f42912d7e03fc353fd08c3e3a046cfba54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
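The inspect dump above is also where the SSH port used throughout these logs comes from: 22/tcp on ha-198834 is published on 127.0.0.1:32783, matching the sshutil lines earlier. A minimal sketch that uses the same inspect template the status helper runs (container name as above; everything else illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the status code uses to find the forwarded SSH port for a node.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-198834").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // "32783" per the dump above
}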
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.026365187s)
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m03.txt │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m03_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt                                                           │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
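The audit table above is the command trail for the CopyFile sub-test: each file is pushed with minikube cp and then read back with minikube ssh -n <node> sudo cat. Rows with an empty END TIME are the ha-198834-m04 commands that apparently never completed, which is consistent with the CopyFile failure recorded above. A representative pair, taken verbatim from the table (run from the integration workspace, hence the out/minikube-linux-amd64 path):

    out/minikube-linux-amd64 -p ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt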
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:58.042095  722351 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:58.042245  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042257  722351 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:58.042263  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042455  722351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0916 23:56:58.043028  722351 out.go:368] Setting JSON to false
	I0916 23:56:58.043951  722351 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9550,"bootTime":1758057468,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:58.044043  722351 start.go:140] virtualization: kvm guest
	I0916 23:56:58.045935  722351 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:58.047229  722351 notify.go:220] Checking for updates...
	I0916 23:56:58.047241  722351 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:58.048693  722351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:58.049858  722351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:56:58.051172  722351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:56:58.052335  722351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:58.053390  722351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:58.054603  722351 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:58.077260  722351 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:58.077444  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.132853  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.122248025 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.132998  722351 docker.go:318] overlay module found
	I0916 23:56:58.135611  722351 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:58.136750  722351 start.go:304] selected driver: docker
	I0916 23:56:58.136770  722351 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:58.136782  722351 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:58.137364  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.190249  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.179811473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.190455  722351 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:58.190736  722351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:58.192641  722351 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:58.193978  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:56:58.194069  722351 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:58.194094  722351 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:58.194188  722351 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:58.195605  722351 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0916 23:56:58.196688  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:56:58.197669  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:58.198952  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.199018  722351 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0916 23:56:58.199034  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:58.199064  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:58.199149  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:58.199167  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:56:58.199618  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:56:58.199650  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json: {Name:mkfd30616e0167206552e80675557cfcc4fee172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:58.218451  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:58.218470  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:58.218487  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:58.218525  722351 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:58.218643  722351 start.go:364] duration metric: took 94.227µs to acquireMachinesLock for "ha-198834"
	I0916 23:56:58.218683  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:56:58.218779  722351 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:58.220943  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:58.221292  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:56:58.221335  722351 client.go:168] LocalClient.Create starting
	I0916 23:56:58.221405  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:56:58.221441  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221461  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221543  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:56:58.221570  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221588  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221956  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:58.238665  722351 cli_runner.go:211] docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:58.238743  722351 network_create.go:284] running [docker network inspect ha-198834] to gather additional debugging logs...
	I0916 23:56:58.238769  722351 cli_runner.go:164] Run: docker network inspect ha-198834
	W0916 23:56:58.254999  722351 cli_runner.go:211] docker network inspect ha-198834 returned with exit code 1
	I0916 23:56:58.255086  722351 network_create.go:287] error running [docker network inspect ha-198834]: docker network inspect ha-198834: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-198834 not found
	I0916 23:56:58.255122  722351 network_create.go:289] output of [docker network inspect ha-198834]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-198834 not found
	
	** /stderr **
	I0916 23:56:58.255285  722351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:58.272422  722351 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b56820}
	I0916 23:56:58.272473  722351 network_create.go:124] attempt to create docker network ha-198834 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:58.272524  722351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-198834 ha-198834
	I0916 23:56:58.332062  722351 network_create.go:108] docker network ha-198834 192.168.49.0/24 created
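minikube probed the existing bridge networks, picked the free 192.168.49.0/24 subnet, and created a labelled network for the profile with the docker network create command in the Run line just above. Because every network and volume it creates carries the created_by.minikube.sigs.k8s.io label, leftovers from aborted runs can be listed (and, if needed, removed) with ordinary label filters; this is a generic docker idiom, not something the test itself runs:

    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
    docker volume ls --filter label=name.minikube.sigs.k8s.io=ha-198834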
	I0916 23:56:58.332109  722351 kic.go:121] calculated static IP "192.168.49.2" for the "ha-198834" container
	I0916 23:56:58.332180  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:58.347722  722351 cli_runner.go:164] Run: docker volume create ha-198834 --label name.minikube.sigs.k8s.io=ha-198834 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:58.365722  722351 oci.go:103] Successfully created a docker volume ha-198834
	I0916 23:56:58.365811  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --entrypoint /usr/bin/test -v ha-198834:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:58.752716  722351 oci.go:107] Successfully prepared a docker volume ha-198834
	I0916 23:56:58.752766  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.752791  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:58.752860  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:02.431811  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.678879308s)
	I0916 23:57:02.431852  722351 kic.go:203] duration metric: took 3.679056906s to extract preloaded images to volume ...
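The 3.7s step above runs tar inside a throwaway kicbase container: the preload tarball is mounted read-only at /preloaded.tar and the ha-198834 volume at /extractDir, so the Docker image layers for v1.34.0 land in the volume before the real node container starts. The same volume-at-/var trick works with any command if you want to peek at what was extracted; the entrypoint override below mirrors how minikube invokes test and tar above and is only an illustration:

    docker run --rm --entrypoint /bin/ls \
      -v ha-198834:/var \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
      -la /var/lib/docker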
	W0916 23:57:02.431981  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:02.432030  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:02.432094  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:02.483868  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834 --name ha-198834 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834 --network ha-198834 --ip 192.168.49.2 --volume ha-198834:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:02.749244  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Running}}
	I0916 23:57:02.769059  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:02.787342  722351 cli_runner.go:164] Run: docker exec ha-198834 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:02.836161  722351 oci.go:144] the created container "ha-198834" has a running status.
	I0916 23:57:02.836195  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa...
	I0916 23:57:03.023198  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:03.023332  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:03.051071  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.071057  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:03.071081  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:03.121506  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.138447  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:03.138553  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.156407  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.156657  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.156674  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:03.295893  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.295938  722351 ubuntu.go:182] provisioning hostname "ha-198834"
	I0916 23:57:03.296023  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.314748  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.314993  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.315008  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0916 23:57:03.463642  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.463716  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.480946  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.481224  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.481264  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:03.616528  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:03.616561  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:03.616587  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:03.616603  722351 provision.go:84] configureAuth start
	I0916 23:57:03.616666  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:03.633505  722351 provision.go:143] copyHostCerts
	I0916 23:57:03.633553  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633590  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:03.633601  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633689  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:03.633796  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633824  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:03.633834  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633870  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:03.633969  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.633996  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:03.634007  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.634050  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:03.634188  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0916 23:57:03.786555  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:03.786617  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:03.786691  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.804115  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:03.900955  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:03.901014  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:57:03.928655  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:03.928721  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:03.953468  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:03.953537  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:03.978330  722351 provision.go:87] duration metric: took 361.708211ms to configureAuth
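configureAuth generated a server certificate on the host with SANs for 127.0.0.1, 192.168.49.2, ha-198834, localhost and minikube, then copied server.pem, server-key.pem and ca.pem into /etc/docker inside the node; dockerd is pointed at them by the --tlsverify/--tlscacert/--tlscert/--tlskey flags written into the unit file a few lines below. Assuming openssl is available in the node image, the SANs can be checked in place:

    docker exec ha-198834 openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName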
	I0916 23:57:03.978356  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:03.978536  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:03.978599  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.995700  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.995934  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.995954  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:04.131514  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:04.131541  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:04.131675  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:04.131752  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.148752  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.148996  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.149060  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:04.298185  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:04.298270  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.315091  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.315309  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.315326  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:05.420254  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:04.295122578 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:05.420296  722351 machine.go:96] duration metric: took 2.281822221s to provisionDockerMachine
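The SSH one-liner at 23:57:04 only swaps in docker.service.new and restarts dockerd because the diff above was non-empty; on an already-provisioned node it is a no-op. Two quick checks against the node confirm which unit systemd actually loaded and that the service came back up (minikube itself runs systemctl cat docker.service a moment later in this log):

    docker exec ha-198834 systemctl cat docker.service    # the unit systemd loaded, including the new ExecStart line
    docker exec ha-198834 systemctl is-active docker      # should print: active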
	I0916 23:57:05.420315  722351 client.go:171] duration metric: took 7.198967751s to LocalClient.Create
	I0916 23:57:05.420340  722351 start.go:167] duration metric: took 7.199048943s to libmachine.API.Create "ha-198834"
	I0916 23:57:05.420350  722351 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0916 23:57:05.420364  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:05.420443  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:05.420495  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.437726  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.536164  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:05.539580  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:05.539616  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:05.539633  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:05.539639  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:05.539653  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:05.539713  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:05.539819  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:05.539836  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:05.540001  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:05.548691  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:05.575226  722351 start.go:296] duration metric: took 154.859714ms for postStartSetup
	I0916 23:57:05.575586  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.591876  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:05.592351  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:05.592412  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.609076  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.701881  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:05.706378  722351 start.go:128] duration metric: took 7.487581015s to createHost
	I0916 23:57:05.706400  722351 start.go:83] releasing machines lock for "ha-198834", held for 7.487744986s
	I0916 23:57:05.706457  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.723047  722351 ssh_runner.go:195] Run: cat /version.json
	I0916 23:57:05.723106  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.723117  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:05.723202  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.739830  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.739978  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.900291  722351 ssh_runner.go:195] Run: systemctl --version
	I0916 23:57:05.905029  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:05.909440  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:05.939050  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:05.939153  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:05.968631  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:05.968659  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:05.968693  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:05.968830  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:05.985490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:05.997349  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:06.007949  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:06.008036  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:06.018490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.028804  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:06.039330  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.049816  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:06.059493  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:06.069825  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:06.080461  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:06.091039  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:06.100019  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:06.109126  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.178675  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:06.251706  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:06.251760  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:06.251809  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:06.264383  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.275792  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:06.294666  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.306227  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:06.317564  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:06.334759  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:06.338327  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:06.348543  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:06.366680  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:06.432452  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:06.496386  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:06.496496  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:06.515617  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:06.527317  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.590441  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:07.360810  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:07.372759  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:07.384493  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.396808  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:07.466973  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:07.538629  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.607976  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:07.630119  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:07.642121  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.709050  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:07.784177  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.797686  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:07.797763  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:07.801576  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:07.801630  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:07.804977  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:07.837851  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
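crictl reads its default endpoint from the /etc/crictl.yaml written earlier (re-pointed at the cri-dockerd socket above), so the version query needs no --runtime-endpoint flag. Equivalent manual checks on the node, as a sketch:

    cat /etc/crictl.yaml        # runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl version
    sudo crictl ps | head       # lists CRI containers once the kubelet starts them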
	I0916 23:57:07.837957  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.862098  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.888678  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:07.888755  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:07.905526  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:07.909605  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
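The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the current gateway address, going through a temp file so the edit is a single copy rather than an in-place sed. A stripped-down sketch of the same pattern (temp-file name illustrative):

    { grep -v $'\thost.minikube.internal$' /etc/hosts;
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
    getent hosts host.minikube.internal   # should now resolve to 192.168.49.1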
	I0916 23:57:07.921677  722351 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:57:07.921793  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:07.921842  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.943020  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.943041  722351 docker.go:621] Images already preloaded, skipping extraction
	I0916 23:57:07.943097  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.963583  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.963609  722351 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:57:07.963623  722351 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0916 23:57:07.963750  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:07.963822  722351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 23:57:08.012977  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:08.013007  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:08.013021  722351 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:57:08.013044  722351 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:57:08.013180  722351 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:57:08.013203  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:08.013244  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:08.026529  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
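The failed `lsmod | grep ip_vs` simply means no IPVS modules are loaded in this kernel, so kube-vip falls back from IPVS-based control-plane load balancing to plain ARP handling of the VIP. On a host where the modules are available they could be loaded like this (a sketch; not something the test attempts):

    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do sudo modprobe "$m"; done
    lsmod | grep ip_vs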
	I0916 23:57:08.026652  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
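Once this static pod is running on the elected leader, the VIP 192.168.49.254 declared in the manifest above should appear as a secondary address on eth0, and the API server should answer on it. A quick check from a shell on that node (a sketch; /livez is served unauthenticated under current Kubernetes defaults):

    ip addr show dev eth0 | grep 192.168.49.254
    curl -sk https://192.168.49.254:8443/livez   # expect "ok" once the control plane is up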
	I0916 23:57:08.026716  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:08.036301  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:08.036379  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:57:08.046128  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 23:57:08.064738  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:08.083216  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0916 23:57:08.101114  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:57:08.121332  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:08.125035  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:08.136734  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:08.207460  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:08.231438  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0916 23:57:08.231468  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:08.231491  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.231634  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:08.231682  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:08.231692  722351 certs.go:256] generating profile certs ...
	I0916 23:57:08.231748  722351 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:08.231761  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt with IP's: []
	I0916 23:57:08.595971  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt ...
	I0916 23:57:08.596008  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt: {Name:mk045c8005e18afdd173496398fb640e85421530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596237  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key ...
	I0916 23:57:08.596255  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key: {Name:mkec7f349d5172bad8ab50dce27926cf4a2810b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596372  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28
	I0916 23:57:08.596390  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:57:08.930707  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 ...
	I0916 23:57:08.930740  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28: {Name:mke8743bf1c0faa0b20cb0336c0e1879fcb77e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.930956  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 ...
	I0916 23:57:08.930975  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28: {Name:mkd63d446f2fe51bc154cd1e5df7f39c484f911b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.931094  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:08.931221  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:57:08.931283  722351 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:08.931298  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt with IP's: []
	I0916 23:57:09.286083  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt ...
	I0916 23:57:09.286118  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt: {Name:mk7d8f9e6931aff0b35e5110e6bb582a3f00c824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286322  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key ...
	I0916 23:57:09.286339  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key: {Name:mkaeef389ff7f9a0b6729cce56a45b0b3aa13296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286448  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:09.286467  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:09.286479  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:09.286489  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:09.286513  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:09.286527  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:09.286538  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:09.286550  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:09.286602  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:09.286641  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:09.286650  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:09.286674  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:09.286702  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:09.286730  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:09.286767  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:09.286792  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.286805  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.286817  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.287381  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:09.312982  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:09.337940  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:09.362347  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:09.386557  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:57:09.412140  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:09.436893  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:09.461871  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:09.487876  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:09.516060  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:09.541440  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:09.567069  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:57:09.585649  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:09.591504  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:09.602004  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605727  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605791  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.612679  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:09.622556  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:09.632414  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636379  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636441  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.643659  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:09.653893  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:09.663837  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667554  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667899  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.675833  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
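The openssl/ln pairs above follow OpenSSL's hashed-directory convention: each CA is linked into /etc/ssl/certs under the hash of its subject (b5213941 for minikubeCA here), which is how tools scanning a -CApath directory locate it. The same step by hand, as an illustrative sketch:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt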
	I0916 23:57:09.686032  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:09.689851  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:09.689923  722351 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:09.690062  722351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 23:57:09.708774  722351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:57:09.718368  722351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
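With the rendered config now promoted to /var/tmp/minikube/kubeadm.yaml, an optional sanity check (not something minikube runs itself) is a kubeadm dry run against it, which validates the file without touching cluster state; it may need the same --ignore-preflight-errors list as the real init below:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run | head -n 20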
	I0916 23:57:09.727825  722351 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:57:09.727888  722351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:57:09.738106  722351 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:57:09.738126  722351 kubeadm.go:157] found existing configuration files:
	
	I0916 23:57:09.738165  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:57:09.747962  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:57:09.748017  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:57:09.757385  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:57:09.766772  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:57:09.766839  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:57:09.775735  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.784848  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:57:09.784955  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.793751  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:57:09.803170  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:57:09.803229  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:57:09.811944  722351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:57:09.867145  722351 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:57:09.919246  722351 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:57:19.614241  722351 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:57:19.614308  722351 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:57:19.614466  722351 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:57:19.614561  722351 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:57:19.614607  722351 kubeadm.go:310] OS: Linux
	I0916 23:57:19.614692  722351 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:57:19.614771  722351 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:57:19.614837  722351 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:57:19.614899  722351 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:57:19.614977  722351 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:57:19.615057  722351 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:57:19.615125  722351 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:57:19.615202  722351 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:57:19.615307  722351 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:57:19.615454  722351 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:57:19.615594  722351 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:57:19.615688  722351 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:57:19.618162  722351 out.go:252]   - Generating certificates and keys ...
	I0916 23:57:19.618260  722351 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:57:19.618349  722351 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:57:19.618445  722351 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:57:19.618533  722351 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:57:19.618635  722351 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:57:19.618717  722351 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:57:19.618792  722351 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:57:19.618993  722351 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619071  722351 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:57:19.619249  722351 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619335  722351 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:57:19.619434  722351 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:57:19.619517  722351 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:57:19.619599  722351 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:57:19.619679  722351 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:57:19.619763  722351 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:57:19.619846  722351 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:57:19.619990  722351 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:57:19.620069  722351 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:57:19.620183  722351 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:57:19.620281  722351 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:57:19.621487  722351 out.go:252]   - Booting up control plane ...
	I0916 23:57:19.621595  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:57:19.621704  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:57:19.621799  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:57:19.621956  722351 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:57:19.622047  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:57:19.622137  722351 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:57:19.622213  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:57:19.622246  722351 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:57:19.622371  722351 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:57:19.622503  722351 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:57:19.622564  722351 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000941296s
	I0916 23:57:19.622663  722351 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:57:19.622778  722351 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:57:19.622893  722351 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:57:19.623021  722351 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:57:19.623126  722351 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.545161134s
	I0916 23:57:19.623210  722351 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.1638517s
	I0916 23:57:19.623273  722351 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001738286s
	I0916 23:57:19.623369  722351 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:57:19.623478  722351 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:57:19.623551  722351 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:57:19.623792  722351 kubeadm.go:310] [mark-control-plane] Marking the node ha-198834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:57:19.623845  722351 kubeadm.go:310] [bootstrap-token] Using token: wg2on6.splp3qzu9xv61vdp
	I0916 23:57:19.625599  722351 out.go:252]   - Configuring RBAC rules ...
	I0916 23:57:19.625697  722351 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:57:19.625769  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:57:19.625966  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:57:19.626123  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:57:19.626261  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:57:19.626367  722351 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:57:19.626473  722351 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:57:19.626522  722351 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:57:19.626564  722351 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:57:19.626570  722351 kubeadm.go:310] 
	I0916 23:57:19.626631  722351 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:57:19.626643  722351 kubeadm.go:310] 
	I0916 23:57:19.626737  722351 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:57:19.626747  722351 kubeadm.go:310] 
	I0916 23:57:19.626781  722351 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:57:19.626863  722351 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:57:19.626960  722351 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:57:19.626973  722351 kubeadm.go:310] 
	I0916 23:57:19.627050  722351 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:57:19.627058  722351 kubeadm.go:310] 
	I0916 23:57:19.627113  722351 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:57:19.627119  722351 kubeadm.go:310] 
	I0916 23:57:19.627167  722351 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:57:19.627238  722351 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:57:19.627297  722351 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:57:19.627302  722351 kubeadm.go:310] 
	I0916 23:57:19.627381  722351 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:57:19.627449  722351 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:57:19.627454  722351 kubeadm.go:310] 
	I0916 23:57:19.627525  722351 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627618  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e \
	I0916 23:57:19.627647  722351 kubeadm.go:310] 	--control-plane 
	I0916 23:57:19.627653  722351 kubeadm.go:310] 
	I0916 23:57:19.627725  722351 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:57:19.627733  722351 kubeadm.go:310] 
	I0916 23:57:19.627801  722351 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627921  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e 
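The bootstrap token in the join commands above carries the 24h TTL set in the kubeadm config earlier, so if it has expired by the time another node joins, a fresh worker join command can be printed on this control plane (a sketch, using the same pinned binary path):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm token create --print-join-command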
	I0916 23:57:19.627933  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:19.627939  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:19.630017  722351 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:57:19.631017  722351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:57:19.635194  722351 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:57:19.635211  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:57:19.655634  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:57:19.855102  722351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:57:19.855186  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:19.855265  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834 minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=true
	I0916 23:57:19.863538  722351 ops.go:34] apiserver oom_adj: -16
	I0916 23:57:19.931275  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.432025  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.932100  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.432105  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.932376  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.432213  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.931583  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.431392  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.932193  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.431927  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.504799  722351 kubeadm.go:1105] duration metric: took 4.649687278s to wait for elevateKubeSystemPrivileges
	I0916 23:57:24.504835  722351 kubeadm.go:394] duration metric: took 14.81493092s to StartCluster
	I0916 23:57:24.504858  722351 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.504967  722351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:57:24.505808  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.506080  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:57:24.506079  722351 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:24.506102  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.506120  722351 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:57:24.506215  722351 addons.go:69] Setting storage-provisioner=true in profile "ha-198834"
	I0916 23:57:24.506241  722351 addons.go:238] Setting addon storage-provisioner=true in "ha-198834"
	I0916 23:57:24.506236  722351 addons.go:69] Setting default-storageclass=true in profile "ha-198834"
	I0916 23:57:24.506263  722351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-198834"
	I0916 23:57:24.506271  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.506311  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:24.506630  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.506797  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.527476  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:24.528010  722351 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:57:24.528028  722351 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:57:24.528032  722351 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:57:24.528036  722351 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:57:24.528039  722351 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:57:24.528105  722351 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:57:24.528384  722351 addons.go:238] Setting addon default-storageclass=true in "ha-198834"
	I0916 23:57:24.528420  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.528683  722351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:57:24.528891  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.530050  722351 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.530067  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:57:24.530109  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.548463  722351 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.548490  722351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:57:24.548552  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.551711  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.575963  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.622716  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
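The pipeline above injects a hosts{} stanza (plus a log directive) into the Corefile so that host.minikube.internal resolves to 192.168.49.1 from inside pods. A sketch of verifying the result via kubectl (image choice illustrative):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
      nslookup host.minikube.internal   # expect 192.168.49.1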
	I0916 23:57:24.680948  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.725959  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.815565  722351 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
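The sed pipeline at 23:57:24.622716 above rewrites the coredns ConfigMap so cluster DNS resolves host.minikube.internal to the network gateway (192.168.49.1). A minimal Go sketch of the same edit, assuming the Corefile is available as a string; injectHostRecord is an illustrative name, not a minikube function:

// Illustrative only: insert a hosts{} block ahead of the forward directive,
// mirroring the sed edit applied to the coredns ConfigMap in the log.
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord returns the Corefile with a hosts block for hostIP added
// immediately before the "forward . /etc/resolv.conf" line.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}

With fallthrough set, lookups that do not match host.minikube.internal continue on to the existing forward block as before.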
	I0916 23:57:25.027949  722351 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:57:25.029176  722351 addons.go:514] duration metric: took 523.059617ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:57:25.029216  722351 start.go:246] waiting for cluster config update ...
	I0916 23:57:25.029233  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:25.030834  722351 out.go:203] 
	I0916 23:57:25.032180  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:25.032246  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.033846  722351 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0916 23:57:25.035651  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:25.036699  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:25.038502  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.038524  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:25.038599  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:25.038624  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:25.038635  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:25.038696  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.064556  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:25.064575  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:25.064593  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:25.064625  722351 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:25.064737  722351 start.go:364] duration metric: took 87.928µs to acquireMachinesLock for "ha-198834-m02"
	I0916 23:57:25.064767  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:25.064852  722351 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:57:25.067030  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:25.067261  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:25.067302  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:25.067392  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:25.067435  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067451  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067520  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:25.067544  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067561  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067817  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:25.087287  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc0008ae780 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:25.087329  722351 kic.go:121] calculated static IP "192.168.49.3" for the "ha-198834-m02" container
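kic.go:121 derives the new node's address from the existing ha-198834 network: the gateway is 192.168.49.1, the first control plane took .2, so m02 gets .3. A rough Go sketch of that arithmetic; nodeIP is an illustrative helper, not the actual kic code:

// Illustrative: derive a node's static IP from the network gateway and its
// index, matching the .2/.3 addresses seen for ha-198834 and ha-198834-m02.
package main

import (
	"fmt"
	"net"
)

func nodeIP(gateway string, nodeIndex int) (string, error) {
	ip := net.ParseIP(gateway).To4()
	if ip == nil {
		return "", fmt.Errorf("not an IPv4 gateway: %q", gateway)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(nodeIndex) // gateway .1 -> first node .2, second node .3, ...
	return out.String(), nil
}

func main() {
	ip, _ := nodeIP("192.168.49.1", 2)
	fmt.Println(ip) // 192.168.49.3
}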
	I0916 23:57:25.087390  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:25.104356  722351 cli_runner.go:164] Run: docker volume create ha-198834-m02 --label name.minikube.sigs.k8s.io=ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:25.128318  722351 oci.go:103] Successfully created a docker volume ha-198834-m02
	I0916 23:57:25.128423  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --entrypoint /usr/bin/test -v ha-198834-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:25.555443  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m02
	I0916 23:57:25.555486  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.555507  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:25.555574  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.769985  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.214340138s)
	I0916 23:57:29.770025  722351 kic.go:203] duration metric: took 4.214511914s to extract preloaded images to volume ...
	W0916 23:57:29.770138  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.770180  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.770230  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.831280  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m02 --name ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m02 --network ha-198834 --ip 192.168.49.3 --volume ha-198834-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:30.118263  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Running}}
	I0916 23:57:30.140753  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.161053  722351 cli_runner.go:164] Run: docker exec ha-198834-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:30.204746  722351 oci.go:144] the created container "ha-198834-m02" has a running status.
	I0916 23:57:30.204782  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa...
	I0916 23:57:30.491277  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:30.491341  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:30.523169  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.546155  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:30.546178  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.603616  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.624695  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.624784  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.648569  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.648946  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.648966  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.800750  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.800784  722351 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0916 23:57:30.800873  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.822237  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.822505  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.822519  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0916 23:57:30.984206  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.984307  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.007082  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.007398  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.007430  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:31.152561  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:31.152598  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:31.152624  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:31.152644  722351 provision.go:84] configureAuth start
	I0916 23:57:31.152709  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:31.171931  722351 provision.go:143] copyHostCerts
	I0916 23:57:31.171978  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172008  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:31.172014  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172081  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:31.172159  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172181  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:31.172185  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172216  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:31.172262  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172279  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:31.172287  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172310  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:31.172361  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0916 23:57:31.314068  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:31.314146  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:31.314208  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.336792  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:31.442195  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:31.442269  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:31.472780  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:31.472841  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:31.499569  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:31.499653  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:31.530277  722351 provision.go:87] duration metric: took 377.61476ms to configureAuth
	I0916 23:57:31.530311  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:31.530528  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:31.530587  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.548573  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.548821  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.548841  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:31.695327  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:31.695357  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:31.695559  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:31.695639  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.715926  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.716269  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.716384  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:31.879960  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:31.880054  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.901465  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.901783  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.901817  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:33.107385  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:31.877658246 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:33.107432  722351 machine.go:96] duration metric: took 2.482713737s to provisionDockerMachine
	I0916 23:57:33.107448  722351 client.go:171] duration metric: took 8.040135103s to LocalClient.Create
	I0916 23:57:33.107471  722351 start.go:167] duration metric: took 8.040214449s to libmachine.API.Create "ha-198834"
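The SSH command at 23:57:31.901 above only installs docker.service.new and restarts Docker when `diff -u` reports a change, which keeps the provisioning step idempotent. A minimal Go sketch of that diff-then-replace pattern; updateUnitIfChanged is an illustrative name:

// Sketch of the diff-then-replace pattern from the log: only install the new
// unit and restart docker when it differs from the one already in place.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func updateUnitIfChanged(current, proposed string) error {
	// diff exits non-zero when the files differ (or the current one is missing).
	if err := exec.Command("sudo", "diff", "-u", current, proposed).Run(); err == nil {
		return nil // identical: nothing to do
	}
	steps := [][]string{
		{"sudo", "mv", proposed, current},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", "docker"},
		{"sudo", "systemctl", "-f", "restart", "docker"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	err := updateUnitIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	if err != nil {
		log.Fatal(err)
	}
}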
	I0916 23:57:33.107480  722351 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0916 23:57:33.107493  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:33.107570  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:33.107624  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.129478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.235200  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:33.239799  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:33.239842  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:33.239854  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:33.239862  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:33.239881  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:33.239961  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:33.240070  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:33.240085  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:33.240211  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:33.252619  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:33.291135  722351 start.go:296] duration metric: took 183.636707ms for postStartSetup
	I0916 23:57:33.291600  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.313645  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:33.314041  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:33.314103  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.337314  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.439716  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:33.445408  722351 start.go:128] duration metric: took 8.380530846s to createHost
	I0916 23:57:33.445437  722351 start.go:83] releasing machines lock for "ha-198834-m02", held for 8.380681461s
	I0916 23:57:33.445500  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.469661  722351 out.go:179] * Found network options:
	I0916 23:57:33.471226  722351 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:33.472373  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:33.472429  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:33.472520  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:33.472550  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:33.472570  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.472621  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.495822  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.496478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.601441  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:33.704002  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:33.704085  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:33.742848  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:33.742881  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:33.742929  722351 detect.go:190] detected "systemd" cgroup driver on host os
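How detect.go settles on "systemd" is not visible in the log; one common heuristic, sketched below purely for illustration and not necessarily the check minikube performs, is to prefer the systemd driver when the host exposes the unified cgroup v2 hierarchy or runs systemd as PID 1:

// Hypothetical heuristic for picking a cgroup driver; the real detect.go
// logic may differ, this only illustrates the idea behind the log line.
package main

import (
	"fmt"
	"os"
	"strings"
)

func detectCgroupDriver() string {
	// cgroup v2 (unified hierarchy) exposes cgroup.controllers at the root.
	_, errV2 := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	initComm, _ := os.ReadFile("/proc/1/comm")
	if errV2 == nil || strings.TrimSpace(string(initComm)) == "systemd" {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println("detected cgroup driver:", detectCgroupDriver())
}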
	I0916 23:57:33.743066  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:33.765394  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:33.781702  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:33.796106  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:33.796186  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:33.811490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.825594  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:33.840006  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.853819  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:33.867424  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:33.882022  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:33.896562  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:33.910813  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:33.923436  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:33.936892  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.033978  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:34.137820  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:34.137955  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:34.138026  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:34.154788  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.170769  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:34.190397  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.207526  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:34.224333  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:34.249827  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:34.255532  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:34.270253  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:34.296311  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:34.391517  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:34.486390  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:34.486452  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:34.512957  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:34.529696  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.623612  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:35.389236  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:35.402665  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:35.418828  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.433733  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:35.524509  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:35.615815  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.688879  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:35.713552  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:35.729264  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.818355  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:35.908063  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.921416  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:35.921483  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:35.925600  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:35.925666  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:35.929510  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:35.970926  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:35.971002  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.001052  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.032731  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:36.033881  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:36.035387  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:36.055948  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:36.061767  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:36.076229  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:57:36.076482  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:36.076794  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:36.099199  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:36.099483  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0916 23:57:36.099498  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:36.099514  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.099667  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:36.099721  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:36.099735  722351 certs.go:256] generating profile certs ...
	I0916 23:57:36.099834  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:36.099867  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4
	I0916 23:57:36.099889  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:36.171638  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 ...
	I0916 23:57:36.171669  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4: {Name:mk274e4893d598b40c8fed777bc1c7c2e951159a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.171866  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 ...
	I0916 23:57:36.171885  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4: {Name:mkf2a66869f0c345fb28cc9925dc0bb02623a928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.172011  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:36.172195  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
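The apiserver serving cert generated above carries IP SANs for the node IPs and the HA virtual IP 192.168.49.254, so clients can reach any control plane through the VIP without TLS errors. A self-contained crypto/x509 sketch of issuing such a CA-signed cert; the in-memory CA and names are illustrative, since minikube reuses the CA it already has on disk:

// Self-contained sketch: issue a CA-signed serving certificate whose SANs
// include the node IPs and the HA virtual IP, as in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In minikube the CA already exists on disk; here one is created in memory
	// so the example runs on its own.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
			net.ParseIP("192.168.49.254"), // the HA VIP
		},
		DNSNames: []string{"localhost", "minikubeCA"},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}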
	I0916 23:57:36.172362  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:36.172381  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:36.172396  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:36.172415  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:36.172438  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:36.172457  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:36.172474  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:36.172493  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:36.172512  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:36.172589  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:36.172634  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:36.172648  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:36.172679  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:36.172703  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:36.172736  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:36.172796  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:36.172840  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.172861  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.172878  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.172963  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:36.194873  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:36.286293  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:36.291948  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:36.308150  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:36.312206  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:36.325598  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:36.329618  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:36.346110  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:36.350017  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:36.365628  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:36.369445  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:36.383675  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:36.387388  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:57:36.403394  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:36.432068  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:36.461592  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:36.491261  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:36.523895  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:36.552719  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:36.580284  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:36.608342  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:36.639670  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:36.672003  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:36.703856  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:36.734275  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:36.755638  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:36.777805  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:36.799338  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:36.821463  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:36.843600  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:57:36.867808  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:36.889233  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:36.896091  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:36.908363  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913145  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913212  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.921857  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:36.934186  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:36.945282  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949180  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949249  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.958068  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:36.970160  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:36.981053  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985350  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985410  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.993828  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:37.004616  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:37.008764  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:37.008830  722351 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0916 23:57:37.008961  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:37.008998  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:37.009050  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:37.026582  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:37.026656  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:57:37.026738  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:37.036867  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:37.036974  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:37.046606  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:57:37.070259  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:37.092325  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:37.116853  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:37.120789  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:37.137396  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:37.223494  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:37.256254  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:37.256574  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:37.256705  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:37.256762  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:37.278264  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:37.435308  722351 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:37.435366  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:54.013635  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (16.578241326s)
	I0916 23:57:54.013701  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:54.233708  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:57:54.308006  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:54.383356  722351 start.go:319] duration metric: took 17.126777498s to joinCluster
	I0916 23:57:54.383433  722351 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:54.383691  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:54.385020  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:54.386187  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:54.491315  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:54.505328  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:54.505398  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:54.505659  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508947  722351 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0916 23:57:56.508979  722351 node_ready.go:38] duration metric: took 2.003299323s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508998  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:56.509065  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:56.521258  722351 api_server.go:72] duration metric: took 2.137779117s to wait for apiserver process to appear ...
	I0916 23:57:56.521298  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:56.521326  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:56.527086  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:56.528055  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:56.528078  722351 api_server.go:131] duration metric: took 6.77168ms to wait for apiserver health ...
	I0916 23:57:56.528087  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:56.534412  722351 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:56.534478  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.534486  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.534497  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.534503  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.534515  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534524  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.534535  722351 system_pods.go:61] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534541  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.534547  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.534559  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.534564  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.534667  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.534716  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534725  722351 system_pods.go:61] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534731  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.534743  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.534748  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.534753  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.534758  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.534765  722351 system_pods.go:74] duration metric: took 6.672375ms to wait for pod list to return data ...
	I0916 23:57:56.534774  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:56.538351  722351 default_sa.go:45] found service account: "default"
	I0916 23:57:56.538385  722351 default_sa.go:55] duration metric: took 3.603096ms for default service account to be created ...
	I0916 23:57:56.538399  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:56.542274  722351 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:56.542301  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.542307  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.542311  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.542314  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.542321  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542325  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.542330  722351 system_pods.go:89] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542334  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.542338  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.542344  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.542347  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.542351  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.542356  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542367  722351 system_pods.go:89] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542371  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.542375  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.542377  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.542380  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.542384  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.542393  722351 system_pods.go:126] duration metric: took 3.988364ms to wait for k8s-apps to be running ...
	I0916 23:57:56.542403  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:56.542447  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:56.554466  722351 system_svc.go:56] duration metric: took 12.054188ms WaitForService to wait for kubelet
	I0916 23:57:56.554496  722351 kubeadm.go:578] duration metric: took 2.171026353s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:56.554519  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:56.557501  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557532  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557552  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557557  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557561  722351 node_conditions.go:105] duration metric: took 3.037317ms to run NodePressure ...
	I0916 23:57:56.557575  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:56.557610  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:56.559549  722351 out.go:203] 
	I0916 23:57:56.561097  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:56.561232  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.562855  722351 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0916 23:57:56.563951  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:56.565051  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:56.566271  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:56.566290  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:56.566373  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:56.566383  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:56.566485  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:56.566581  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.586635  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:56.586656  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:56.586673  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:56.586704  722351 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:56.586811  722351 start.go:364] duration metric: took 87.391µs to acquireMachinesLock for "ha-198834-m03"
	I0916 23:57:56.586843  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:56.587003  722351 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:56.589063  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:56.589158  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:56.589187  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:56.589263  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:56.589299  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589313  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589365  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:56.589385  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589398  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589634  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:56.607248  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc001595440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:56.607297  722351 kic.go:121] calculated static IP "192.168.49.4" for the "ha-198834-m03" container
	I0916 23:57:56.607371  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:56.624198  722351 cli_runner.go:164] Run: docker volume create ha-198834-m03 --label name.minikube.sigs.k8s.io=ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:56.642183  722351 oci.go:103] Successfully created a docker volume ha-198834-m03
	I0916 23:57:56.642258  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --entrypoint /usr/bin/test -v ha-198834-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:57.021785  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m03
	I0916 23:57:57.021834  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:57.021864  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:57.021952  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:59.672995  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.650992477s)
	I0916 23:57:59.673039  722351 kic.go:203] duration metric: took 2.651177157s to extract preloaded images to volume ...
	W0916 23:57:59.673144  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:59.673190  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:59.673255  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:59.730169  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m03 --name ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m03 --network ha-198834 --ip 192.168.49.4 --volume ha-198834-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:58:00.013728  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Running}}
	I0916 23:58:00.034076  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.054832  722351 cli_runner.go:164] Run: docker exec ha-198834-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:58:00.109517  722351 oci.go:144] the created container "ha-198834-m03" has a running status.
	I0916 23:58:00.109546  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa...
	I0916 23:58:00.621029  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:58:00.621097  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:58:00.651614  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.673435  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:58:00.673460  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:58:00.730412  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.749865  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:58:00.750006  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.771445  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.771738  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.771754  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:58:00.920523  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:00.920553  722351 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0916 23:58:00.920616  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.940561  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.940837  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.940853  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0916 23:58:01.103101  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:01.103204  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:01.125182  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:01.125511  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:01.125543  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:58:01.275155  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:58:01.275201  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:58:01.275231  722351 ubuntu.go:190] setting up certificates
	I0916 23:58:01.275246  722351 provision.go:84] configureAuth start
	I0916 23:58:01.275318  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:01.296305  722351 provision.go:143] copyHostCerts
	I0916 23:58:01.296378  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296426  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:58:01.296439  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296527  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:58:01.296632  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296656  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:58:01.296682  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296726  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:58:01.296788  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296825  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:58:01.296835  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296924  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:58:01.297040  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
	I0916 23:58:02.100987  722351 provision.go:177] copyRemoteCerts
	I0916 23:58:02.101048  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:58:02.101084  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.119475  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:02.218802  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:58:02.218870  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:58:02.251628  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:58:02.251700  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:58:02.279052  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:58:02.279124  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:58:02.305168  722351 provision.go:87] duration metric: took 1.029902032s to configureAuth
	I0916 23:58:02.305208  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:58:02.305440  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:02.305491  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.322139  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.322413  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.322428  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:58:02.459594  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:58:02.459629  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:58:02.459746  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:58:02.459804  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.476657  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.476985  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.477099  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:58:02.633394  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:58:02.633489  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.651145  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.651390  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.651410  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:58:03.800032  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:58:02.631485455 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:58:03.800077  722351 machine.go:96] duration metric: took 3.050188223s to provisionDockerMachine
	I0916 23:58:03.800094  722351 client.go:171] duration metric: took 7.210891992s to LocalClient.Create
	I0916 23:58:03.800121  722351 start.go:167] duration metric: took 7.210962522s to libmachine.API.Create "ha-198834"
	I0916 23:58:03.800131  722351 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0916 23:58:03.800155  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:58:03.800229  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:58:03.800295  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.817949  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:03.918038  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:58:03.922382  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:58:03.922420  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:58:03.922430  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:58:03.922438  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:58:03.922452  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:58:03.922512  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:58:03.922607  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:58:03.922620  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:58:03.922727  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:58:03.932298  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:03.961387  722351 start.go:296] duration metric: took 161.230642ms for postStartSetup
	I0916 23:58:03.961811  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:03.979123  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:58:03.979395  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:58:03.979437  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.997520  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.091253  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:58:04.096537  722351 start.go:128] duration metric: took 7.509514126s to createHost
	I0916 23:58:04.096585  722351 start.go:83] releasing machines lock for "ha-198834-m03", held for 7.509743952s
	I0916 23:58:04.096660  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:04.115702  722351 out.go:179] * Found network options:
	I0916 23:58:04.117029  722351 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:58:04.118232  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118256  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118281  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118299  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:58:04.118395  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:58:04.118441  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.118449  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:58:04.118515  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.136875  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.137594  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.231418  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:58:04.311016  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:58:04.311108  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:58:04.340810  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:58:04.340841  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.340871  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.340997  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.359059  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:58:04.371794  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:58:04.383345  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:58:04.383421  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:58:04.394513  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.405081  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:58:04.415653  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.426510  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:58:04.436405  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:58:04.447135  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:58:04.457926  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:58:04.469563  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:58:04.478599  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:58:04.488307  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:04.557785  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
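The containerd reconfiguration above reduces to three things: point crictl at the containerd socket, switch the CRI plugin to the systemd cgroup driver (and the pause:3.10.1 sandbox image), and confirm bridge/forwarding sysctls before restarting. A minimal manual equivalent, using only commands already shown in the log lines above, might look like:

    # point crictl at containerd (same payload the log writes to /etc/crictl.yaml)
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml

    # systemd cgroup driver and pause:3.10.1 sandbox image for containerd's CRI plugin
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml

    # check bridge-nf-call-iptables, enable IPv4 forwarding, then restart containerd
    sudo sysctl net.bridge.bridge-nf-call-iptables
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart containerd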
	I0916 23:58:04.636805  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.636855  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.636899  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:58:04.649865  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.662323  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:58:04.680711  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.693319  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:58:04.705665  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.723842  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:58:04.727547  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:58:04.738845  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:58:04.758974  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:58:04.830471  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:58:04.900429  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:58:04.900482  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:58:04.920093  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:58:04.931599  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:05.002855  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:58:05.807532  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:58:05.819728  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:58:05.832303  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:05.844347  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:58:05.916277  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:58:05.988520  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.055206  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:58:06.080490  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:58:06.092817  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.162707  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:58:06.248276  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
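The 129-byte /etc/docker/daemon.json pushed above is not echoed in the log, only its size; a plausible minimal file matching the stated goal ("configuring docker to use systemd as cgroup driver") is sketched below as an assumption, together with the crictl endpoint the log does show verbatim:

    # assumed daemon.json contents -- the log reports only the byte count, not the payload
    printf '{\n  "exec-opts": ["native.cgroupdriver=systemd"]\n}\n' | sudo tee /etc/docker/daemon.json

    # crictl endpoint for cri-dockerd, exactly as written earlier in the log
    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml

    sudo systemctl daemon-reload && sudo systemctl restart docker cri-docker.service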
	I0916 23:58:06.261931  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:58:06.262000  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:58:06.265868  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:58:06.265941  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:58:06.269385  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:58:06.305058  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:58:06.305139  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.331725  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.358446  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:58:06.359714  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:58:06.360964  722351 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:58:06.362187  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:58:06.379025  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:58:06.383173  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:06.394963  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:58:06.395208  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:06.395415  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:58:06.412700  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:06.412979  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0916 23:58:06.412992  722351 certs.go:194] generating shared ca certs ...
	I0916 23:58:06.413008  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:06.413150  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:58:06.413202  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:58:06.413213  722351 certs.go:256] generating profile certs ...
	I0916 23:58:06.413290  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:58:06.413316  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0916 23:58:06.413331  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:58:07.059616  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 ...
	I0916 23:58:07.059648  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783: {Name:mka6f3e20ae0db98330bce12c7c53c8ceb029f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.059850  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 ...
	I0916 23:58:07.059873  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783: {Name:mk88fba5116449476945068bb066a5fae095ca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.060019  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:58:07.060173  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:58:07.060303  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:58:07.060320  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:58:07.060332  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:58:07.060346  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:58:07.060359  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:58:07.060371  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:58:07.060383  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:58:07.060395  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:58:07.060407  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:58:07.060462  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:58:07.060492  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:58:07.060502  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:58:07.060525  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:58:07.060546  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:58:07.060571  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:58:07.060609  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:07.060634  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.060648  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.060666  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.060725  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:07.077675  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:07.167227  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:58:07.171339  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:58:07.184631  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:58:07.188345  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:58:07.201195  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:58:07.204727  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:58:07.217344  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:58:07.220977  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:58:07.233804  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:58:07.237296  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:58:07.250936  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:58:07.254504  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:58:07.267513  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:58:07.293250  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:58:07.319357  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:58:07.345045  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:58:07.370793  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:58:07.397411  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:58:07.422329  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:58:07.447186  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:58:07.472564  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:58:07.500373  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:58:07.526598  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:58:07.552426  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:58:07.570062  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:58:07.589628  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:58:07.609486  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:58:07.630629  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:58:07.650280  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:58:07.669308  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:58:07.687700  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:58:07.694681  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:58:07.705784  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709662  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709739  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.716649  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:58:07.726290  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:58:07.736118  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740041  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740101  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.747081  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:58:07.757480  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:58:07.767310  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771054  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771114  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.778013  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
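The hash-named symlinks just created (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup scheme: openssl x509 -hash -noout -in <cert> prints an 8-hex-digit subject hash, and the trust store expects each CA to be reachable as /etc/ssl/certs/<hash>.0, which is why the log runs the hash command right before each ln -fs. A quick way to confirm a link on the node:

    # print the subject hash OpenSSL uses to locate a CA, then inspect the matching symlink
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"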
	I0916 23:58:07.788245  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:58:07.792058  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:58:07.792123  722351 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0916 23:58:07.792232  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:58:07.792263  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:58:07.792307  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:58:07.805180  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:58:07.805247  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
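Because the lsmod probe above found no ip_vs modules, kube-vip is generated without IPVS control-plane load balancing and instead advertises the VIP 192.168.49.254 over ARP on eth0 (the vip_arp/vip_interface settings in the manifest). On a host where IPVS is wanted, the standard Linux IPVS modules could be checked and loaded manually beforehand, for example:

    # same probe the log uses, then load the usual IPVS modules if the kernel provides them
    sudo sh -c "lsmod | grep ip_vs"
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh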
	I0916 23:58:07.805296  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:58:07.814610  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:58:07.814678  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:58:07.825352  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:58:07.844047  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:58:07.862757  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:58:07.883848  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:58:07.887562  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:07.899646  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:07.974384  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:08.004718  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:08.005001  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:f
alse logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:08.005124  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:58:08.005169  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:08.024622  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:08.169785  722351 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:08.169853  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:58:25.708852  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (17.538975369s)
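The join above is the standard two-step kubeadm flow: the existing control plane mints a token and prints the join command (the kubeadm token create --print-join-command --ttl=0 call earlier), and the new node runs it with the control-plane flags; it presumably needs no separate certificate upload here because the cluster certificates were already copied into /var/lib/minikube/certs by the scp steps above. In generic form (placeholder token and hash; the real values appear in the log line above):

    # on an existing control-plane node: print a join command with a non-expiring token
    kubeadm token create --print-join-command --ttl=0

    # on the joining node: run the printed command plus the control-plane flags used above
    sudo kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443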
	I0916 23:58:25.708884  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:58:25.930343  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m03 minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:58:26.006016  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:58:26.089408  722351 start.go:319] duration metric: took 18.084403561s to joinCluster
	I0916 23:58:26.089494  722351 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:26.089805  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:26.091004  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:58:26.092246  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:26.200675  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:26.214424  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:58:26.214506  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:58:26.214713  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	W0916 23:58:28.218137  722351 node_ready.go:57] node "ha-198834-m03" has "Ready":"False" status (will retry)
	I0916 23:58:29.718579  722351 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0916 23:58:29.718621  722351 node_ready.go:38] duration metric: took 3.503891029s for node "ha-198834-m03" to be "Ready" ...
	I0916 23:58:29.718640  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:58:29.718688  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:58:29.730821  722351 api_server.go:72] duration metric: took 3.641289304s to wait for apiserver process to appear ...
	I0916 23:58:29.730847  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:58:29.730870  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:58:29.736447  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:58:29.737363  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:58:29.737382  722351 api_server.go:131] duration metric: took 6.528439ms to wait for apiserver health ...
	I0916 23:58:29.737390  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:58:29.743125  722351 system_pods.go:59] 27 kube-system pods found
	I0916 23:58:29.743154  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.743159  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.743162  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.743166  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.743169  722351 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.743172  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.743179  722351 system_pods.go:61] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743182  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.743189  722351 system_pods.go:61] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743193  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.743198  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.743202  722351 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.743206  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.743209  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.743212  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.743216  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.743220  722351 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743227  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.743231  722351 system_pods.go:61] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743236  722351 system_pods.go:61] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743241  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.743245  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.743248  722351 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.743251  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.743254  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.743257  722351 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.743260  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.743267  722351 system_pods.go:74] duration metric: took 5.871633ms to wait for pod list to return data ...
	I0916 23:58:29.743275  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:58:29.746038  722351 default_sa.go:45] found service account: "default"
	I0916 23:58:29.746059  722351 default_sa.go:55] duration metric: took 2.77496ms for default service account to be created ...
	I0916 23:58:29.746067  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:58:29.751428  722351 system_pods.go:86] 27 kube-system pods found
	I0916 23:58:29.751454  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.751459  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.751463  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.751466  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.751469  722351 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.751472  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.751478  722351 system_pods.go:89] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751482  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.751490  722351 system_pods.go:89] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751494  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.751498  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.751501  722351 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.751504  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.751508  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.751512  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.751515  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.751520  722351 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751526  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.751530  722351 system_pods.go:89] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751535  722351 system_pods.go:89] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751540  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.751545  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.751550  722351 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.751554  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.751558  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.751563  722351 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.751569  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.751577  722351 system_pods.go:126] duration metric: took 5.505301ms to wait for k8s-apps to be running ...
	I0916 23:58:29.751587  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:58:29.751637  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:58:29.764067  722351 system_svc.go:56] duration metric: took 12.467532ms WaitForService to wait for kubelet
	I0916 23:58:29.764102  722351 kubeadm.go:578] duration metric: took 3.674577242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:58:29.764127  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:58:29.767676  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767699  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767712  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767717  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767721  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767724  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767728  722351 node_conditions.go:105] duration metric: took 3.595861ms to run NodePressure ...
	I0916 23:58:29.767739  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:58:29.767761  722351 start.go:255] writing updated cluster config ...
	I0916 23:58:29.768076  722351 ssh_runner.go:195] Run: rm -f paused
	I0916 23:58:29.772054  722351 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:29.772528  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:58:29.776391  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781517  722351 pod_ready.go:94] pod "coredns-66bc5c9577-5wx4k" is "Ready"
	I0916 23:58:29.781544  722351 pod_ready.go:86] duration metric: took 5.128752ms for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781552  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.786524  722351 pod_ready.go:94] pod "coredns-66bc5c9577-mjbz6" is "Ready"
	I0916 23:58:29.786549  722351 pod_ready.go:86] duration metric: took 4.991527ms for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.789148  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793593  722351 pod_ready.go:94] pod "etcd-ha-198834" is "Ready"
	I0916 23:58:29.793614  722351 pod_ready.go:86] duration metric: took 4.43654ms for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793622  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797833  722351 pod_ready.go:94] pod "etcd-ha-198834-m02" is "Ready"
	I0916 23:58:29.797856  722351 pod_ready.go:86] duration metric: took 4.228462ms for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797864  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.974055  722351 request.go:683] "Waited before sending request" delay="176.0853ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.173047  722351 request.go:683] "Waited before sending request" delay="193.205885ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.373324  722351 request.go:683] "Waited before sending request" delay="74.260595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.573189  722351 request.go:683] "Waited before sending request" delay="196.187075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.973960  722351 request.go:683] "Waited before sending request" delay="171.749825ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.977519  722351 pod_ready.go:94] pod "etcd-ha-198834-m03" is "Ready"
	I0916 23:58:30.977548  722351 pod_ready.go:86] duration metric: took 1.179678858s for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.172996  722351 request.go:683] "Waited before sending request" delay="195.270589ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:58:31.176896  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.373184  722351 request.go:683] "Waited before sending request" delay="196.155083ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834"
	I0916 23:58:31.573091  722351 request.go:683] "Waited before sending request" delay="196.292532ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:31.576254  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834" is "Ready"
	I0916 23:58:31.576280  722351 pod_ready.go:86] duration metric: took 399.33205ms for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.576288  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.773718  722351 request.go:683] "Waited before sending request" delay="197.34633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m02"
	I0916 23:58:31.973716  722351 request.go:683] "Waited before sending request" delay="196.477986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:31.978504  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m02" is "Ready"
	I0916 23:58:31.978555  722351 pod_ready.go:86] duration metric: took 402.258846ms for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.978567  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.172964  722351 request.go:683] "Waited before sending request" delay="194.26238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m03"
	I0916 23:58:32.373491  722351 request.go:683] "Waited before sending request" delay="197.345263ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:32.376525  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m03" is "Ready"
	I0916 23:58:32.376552  722351 pod_ready.go:86] duration metric: took 397.9768ms for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.573017  722351 request.go:683] "Waited before sending request" delay="196.299414ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:58:32.577487  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.773969  722351 request.go:683] "Waited before sending request" delay="196.341624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834"
	I0916 23:58:32.973585  722351 request.go:683] "Waited before sending request" delay="196.346276ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:32.977689  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834" is "Ready"
	I0916 23:58:32.977721  722351 pod_ready.go:86] duration metric: took 400.206125ms for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.977735  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.173032  722351 request.go:683] "Waited before sending request" delay="195.180271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m02"
	I0916 23:58:33.373811  722351 request.go:683] "Waited before sending request" delay="197.350717ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:33.376722  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m02" is "Ready"
	I0916 23:58:33.376747  722351 pod_ready.go:86] duration metric: took 399.004052ms for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.376756  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.573048  722351 request.go:683] "Waited before sending request" delay="196.186349ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m03"
	I0916 23:58:33.773733  722351 request.go:683] "Waited before sending request" delay="197.347012ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:33.776944  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m03" is "Ready"
	I0916 23:58:33.776972  722351 pod_ready.go:86] duration metric: took 400.209131ms for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.973425  722351 request.go:683] "Waited before sending request" delay="196.344301ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:58:33.977203  722351 pod_ready.go:83] waiting for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.173688  722351 request.go:683] "Waited before sending request" delay="196.345801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tkhn"
	I0916 23:58:34.373026  722351 request.go:683] "Waited before sending request" delay="196.256084ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:34.376079  722351 pod_ready.go:94] pod "kube-proxy-5tkhn" is "Ready"
	I0916 23:58:34.376106  722351 pod_ready.go:86] duration metric: took 398.875647ms for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.376114  722351 pod_ready.go:83] waiting for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.573402  722351 request.go:683] "Waited before sending request" delay="197.174223ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:34.773022  722351 request.go:683] "Waited before sending request" delay="196.289258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:34.973958  722351 request.go:683] "Waited before sending request" delay="97.260541ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:35.173637  722351 request.go:683] "Waited before sending request" delay="196.407064ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.573487  722351 request.go:683] "Waited before sending request" delay="193.254271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.973307  722351 request.go:683] "Waited before sending request" delay="93.259111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	W0916 23:58:36.383328  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:38.882062  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:40.882520  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:42.883194  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:45.382843  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:47.882744  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:49.882993  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:51.883265  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:54.383005  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:56.882555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:59.382463  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:01.382897  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:03.883583  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:06.382581  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:08.882275  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:11.382224  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:13.382333  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:15.882727  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:18.383800  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:20.882547  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:22.883081  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:25.383627  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:27.882377  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:29.882787  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:31.884042  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:34.382932  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:36.882730  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:38.882959  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:40.883411  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:43.382771  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:45.882938  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:48.381607  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:50.382229  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:52.382889  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:54.882546  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:56.882802  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:58.882939  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:00.883550  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:03.382872  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:05.383021  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:07.384166  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:09.883064  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:11.884141  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:14.383248  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:16.883441  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:18.884438  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:21.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:23.883713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:26.383093  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:28.883552  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:31.383392  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:33.883626  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:35.883823  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:38.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:40.883430  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:43.383026  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:45.883091  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:48.382865  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:50.882713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:52.882989  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:55.383076  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:57.383555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:59.882704  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:01.883495  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:04.382406  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:06.383424  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:08.883456  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:11.382988  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:13.882379  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:15.883651  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:18.382551  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:20.382997  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:22.882943  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:24.883256  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:27.383660  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:29.882955  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:32.383364  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	I0917 00:01:34.382530  722351 pod_ready.go:94] pod "kube-proxy-d8brp" is "Ready"
	I0917 00:01:34.382562  722351 pod_ready.go:86] duration metric: took 3m0.006439942s for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.382572  722351 pod_ready.go:83] waiting for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.387645  722351 pod_ready.go:94] pod "kube-proxy-h2fxd" is "Ready"
	I0917 00:01:34.387677  722351 pod_ready.go:86] duration metric: took 5.098826ms for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.390707  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396086  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834" is "Ready"
	I0917 00:01:34.396115  722351 pod_ready.go:86] duration metric: took 5.379692ms for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396126  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400646  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m02" is "Ready"
	I0917 00:01:34.400670  722351 pod_ready.go:86] duration metric: took 4.536355ms for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400680  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.577209  722351 request.go:683] "Waited before sending request" delay="174.117357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0917 00:01:34.580767  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m03" is "Ready"
	I0917 00:01:34.580796  722351 pod_ready.go:86] duration metric: took 180.109317ms for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.580808  722351 pod_ready.go:40] duration metric: took 3m4.808720134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:01:34.629691  722351 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:01:34.631405  722351 out.go:179] * Done! kubectl is now configured to use "ha-198834" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50aecbe9f874a63c5159d55af06211bca7903e623f01f1e603f267caaf6da9a7/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.259744438Z" level=info msg="ignoring event" container=fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.275867775Z" level=info msg="ignoring event" container=64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.320870537Z" level=info msg="ignoring event" container=310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.336829292Z" level=info msg="ignoring event" container=a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687384709Z" level=info msg="ignoring event" container=11889e34950f849cf7805c6d56f1957ad9d5af727f4810f2da728671398b9f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687719889Z" level=info msg="ignoring event" container=1ccdf9f33d5601763297f230a2f6e51620db2ed183e9f4b9179f4ccef579dfac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756623723Z" level=info msg="ignoring event" container=bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756673284Z" level=info msg="ignoring event" container=870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:01:36 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:01:37 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:37Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	43ce744921507       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   2 minutes ago       Running             busybox                   0                   2ab50e090d466       busybox-7b57f96db7-pstjp
	f4f7ea59034e3       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   7b09bb6b1db0b       coredns-66bc5c9577-mjbz6
	9a9eb43950f05       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   bfd4ac8a61c79       coredns-66bc5c9577-5wx4k
	1ccdf9f33d560       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   bf6d6b59f2413       coredns-66bc5c9577-mjbz6
	11889e34950f8       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   870758f308362       coredns-66bc5c9577-5wx4k
	470c5aeb0143c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              6 minutes ago       Running             kindnet-cni               0                   f541f878be896       kindnet-h28vp
	b16ddbbc469c5       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       0                   50aecbe9f874a       storage-provisioner
	2da683f529549       df0860106674d                                                                                         6 minutes ago       Running             kube-proxy                0                   b04f554fbbf03       kube-proxy-5tkhn
	8a32665f7e3e4       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     6 minutes ago       Running             kube-vip                  0                   5e4aed7a38e18       kube-vip-ha-198834
	4f536df8f44eb       a0af72f2ec6d6                                                                                         6 minutes ago       Running             kube-controller-manager   0                   3f97e150fa11b       kube-controller-manager-ha-198834
	ea129c2b5408a       90550c43ad2bc                                                                                         6 minutes ago       Running             kube-apiserver            0                   364803df34eb0       kube-apiserver-ha-198834
	69601afa8d5b0       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      0                   d6bbb58cc14ca       etcd-ha-198834
	82a99d0c7744a       46169d968e920                                                                                         6 minutes ago       Running             kube-scheduler            0                   7ffde546949d7       kube-scheduler-ha-198834
	
	
	==> coredns [11889e34950f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50107 - 45856 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000165011s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:50484 - 7509 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000096464s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [1ccdf9f33d56] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49262 - 38359 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000112146s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:51442 - 41164 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000125545s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [9a9eb43950f0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48462 - 46874 "HINFO IN 5273252588524494281.7165436024008789767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039842483s
	[INFO] 10.244.1.2:57104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003506199s
	[INFO] 10.244.1.2:35085 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.022178595s
	[INFO] 10.244.0.4:51301 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001619426s
	[INFO] 10.244.1.2:53849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259853s
	[INFO] 10.244.1.2:45188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162256s
	[INFO] 10.244.1.2:47534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012152721s
	[INFO] 10.244.1.2:52406 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144842s
	[INFO] 10.244.0.4:34463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015354953s
	[INFO] 10.244.0.4:44729 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186473s
	[INFO] 10.244.0.4:49846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119774s
	[INFO] 10.244.0.4:48015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170848s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022712s
	[INFO] 10.244.1.2:41177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176943s
	[INFO] 10.244.1.2:35431 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141918s
	[INFO] 10.244.0.4:42357 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222475s
	[INFO] 10.244.0.4:38639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076388s
	[INFO] 10.244.1.2:48245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137114s
	
	
	==> coredns [f4f7ea59034e] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55965 - 29875 "HINFO IN 6625775143588404920.46653605595863863. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.021009667s
	[INFO] 10.244.1.2:48391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347519s
	[INFO] 10.244.1.2:52968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.006268731s
	[INFO] 10.244.1.2:37064 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.01625905s
	[INFO] 10.244.0.4:38724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015051s
	[INFO] 10.244.0.4:54867 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000812822s
	[INFO] 10.244.0.4:36556 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000823234s
	[INFO] 10.244.0.4:49673 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000092222s
	[INFO] 10.244.1.2:54588 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010881123s
	[INFO] 10.244.1.2:37311 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184317s
	[INFO] 10.244.1.2:34776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214951s
	[INFO] 10.244.1.2:60592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142928s
	[INFO] 10.244.0.4:49014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014497s
	[INFO] 10.244.0.4:49266 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004264282s
	[INFO] 10.244.0.4:42048 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128777s
	[INFO] 10.244.0.4:37542 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071541s
	[INFO] 10.244.1.2:43417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165726s
	[INFO] 10.244.0.4:54211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155228s
	[INFO] 10.244.0.4:54131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161968s
	[INFO] 10.244.1.2:34766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190106s
	[INFO] 10.244.1.2:35363 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179724s
	[INFO] 10.244.1.2:39508 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110509s
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 3525bf030f0d49c1ab057441433c477c
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m45s
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m45s
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m51s
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m45s
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m44s  kube-proxy       
	  Normal  Starting                 6m51s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m51s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m51s  kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m51s  kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m51s  kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m46s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           6m17s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           5m46s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 35caf7934a824e33949ce426f7316bfd
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m13s
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m16s
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m9s   kube-proxy       
	  Normal  RegisteredNode  6m12s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  6m11s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  5m46s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	
	
	Name:               ha-198834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:04:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-198834-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 32c4e7dc065e4fa49595825994457b8e
	  System UUID:                6f810798-3461-44d1-91c3-d55b483ec842
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l2jn5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 etcd-ha-198834-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m40s
	  kube-system                 kindnet-67fn9                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m35s
	  kube-system                 kube-apiserver-ha-198834-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-ha-198834-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-d8brp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-scheduler-ha-198834-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-vip-ha-198834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  5m42s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  5m41s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  5m41s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
	
	
	==> etcd [69601afa8d5b] <==
	{"level":"info","ts":"2025-09-16T23:58:12.699050Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-16T23:58:12.699094Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.699108Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.702028Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-16T23:58:12.702080Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.702094Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.733438Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.736369Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-16T23:58:12.759123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:34222","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:58:12.760774Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(5981864578030751937 12593026477526642892 12956928539845794953)"}
	{"level":"info","ts":"2025-09-16T23:58:12.760967Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.761007Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:19.991223Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:25.496900Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:30.072550Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:32.068856Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:40.123997Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:42.678047Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","bytes":1393601,"size":"1.4 MB","took":"30.013494343s"}
	{"level":"info","ts":"2025-09-17T00:03:27.515545Z","caller":"traceutil/trace.go:172","msg":"trace[429348455] transaction","detail":"{read_only:false; response_revision:1816; number_of_response:1; }","duration":"111.335739ms","start":"2025-09-17T00:03:27.404190Z","end":"2025-09-17T00:03:27.515525Z","steps":["trace[429348455] 'process raft request'  (duration: 111.14691ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:03:45.321237Z","caller":"traceutil/trace.go:172","msg":"trace[1168397664] transaction","detail":"{read_only:false; response_revision:1860; number_of_response:1; }","duration":"125.134331ms","start":"2025-09-17T00:03:45.196084Z","end":"2025-09-17T00:03:45.321218Z","steps":["trace[1168397664] 'process raft request'  (duration: 124.989711ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:03:45.959335Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.771431ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040017681051689 > lease_revoke:<id:50c19954f670abb9>","response":"size:29"}
	{"level":"info","ts":"2025-09-17T00:03:45.960220Z","caller":"traceutil/trace.go:172","msg":"trace[1051336348] linearizableReadLoop","detail":"{readStateIndex:2294; appliedIndex:2293; }","duration":"253.51671ms","start":"2025-09-17T00:03:45.706683Z","end":"2025-09-17T00:03:45.960199Z","steps":["trace[1051336348] 'read index received'  (duration: 352.53µs)","trace[1051336348] 'applied index is now lower than readState.Index'  (duration: 253.162091ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:03:45.960342Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"293.914233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:03:45.960374Z","caller":"traceutil/trace.go:172","msg":"trace[305973442] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:1862; }","duration":"293.967568ms","start":"2025-09-17T00:03:45.666397Z","end":"2025-09-17T00:03:45.960365Z","steps":["trace[305973442] 'agreement among raft nodes before linearized reading'  (duration: 293.876046ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:03:45.960547Z","caller":"traceutil/trace.go:172","msg":"trace[2000303218] transaction","detail":"{read_only:false; response_revision:1863; number_of_response:1; }","duration":"248.094618ms","start":"2025-09-17T00:03:45.712439Z","end":"2025-09-17T00:03:45.960534Z","steps":["trace[2000303218] 'process raft request'  (duration: 247.028417ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:04:09 up  2:46,  0 users,  load average: 1.95, 1.42, 1.14
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [470c5aeb0143] <==
	I0917 00:03:20.423393       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:30.423984       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:30.424027       1 main.go:301] handling current node
	I0917 00:03:30.424048       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:30.424055       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:30.424343       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:30.424355       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:40.423382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:40.423419       1 main.go:301] handling current node
	I0917 00:03:40.423434       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:40.423439       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:03:40.423677       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:40.423692       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:50.420798       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:50.420829       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:50.421086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:50.421118       1 main.go:301] handling current node
	I0917 00:03:50.421132       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:50.421136       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:04:00.426015       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:04:00.426065       1 main.go:301] handling current node
	I0917 00:04:00.426087       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:04:00.426094       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:04:00.426329       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:04:00.426343       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [ea129c2b5408] <==
	I0916 23:57:24.194840       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:57:24.200277       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:57:24.242655       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0916 23:58:29.048843       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:34.361323       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:36.632983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:02.667929       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:58.976838       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:19.218755       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:15.644338       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:43.338268       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:03:18.851078       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58262: use of closed network connection
	E0917 00:03:19.024113       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58282: use of closed network connection
	E0917 00:03:19.194951       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58306: use of closed network connection
	E0917 00:03:19.388722       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58332: use of closed network connection
	E0917 00:03:19.557698       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58342: use of closed network connection
	E0917 00:03:19.744687       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58348: use of closed network connection
	E0917 00:03:19.919836       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58362: use of closed network connection
	E0917 00:03:20.087518       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58376: use of closed network connection
	E0917 00:03:20.254024       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58398: use of closed network connection
	E0917 00:03:22.459781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48968: use of closed network connection
	E0917 00:03:22.632160       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48992: use of closed network connection
	E0917 00:03:22.799975       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:49024: use of closed network connection
	I0917 00:03:39.352525       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:47.239226       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4f536df8f44e] <==
	I0916 23:57:23.340737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0916 23:57:23.340876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:57:23.341125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:57:23.341625       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0916 23:57:23.341694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0916 23:57:23.342559       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0916 23:57:23.344828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:57:23.344975       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:57:23.345054       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:57:23.345095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:57:23.345107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:57:23.345114       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:57:23.346125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:23.351186       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834" podCIDRs=["10.244.0.0/24"]
	I0916 23:57:23.356557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:23.360087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:53.917484       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m02\" does not exist"
	I0916 23:57:53.927329       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:58.295579       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	E0916 23:58:24.690329       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 23:58:24.703047       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:58:25.387067       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m03\" does not exist"
	I0916 23:58:25.397154       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m03" podCIDRs=["10.244.2.0/24"]
	I0916 23:58:28.308323       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	E0917 00:01:35.727697       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [2da683f52954] <==
	I0916 23:57:24.932824       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:57:25.001436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:57:25.102414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:57:25.102449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:57:25.102563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:57:25.131540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:57:25.131604       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:57:25.138482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:57:25.139006       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:57:25.139079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:57:25.143232       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:57:25.143254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:57:25.143282       1 config.go:200] "Starting service config controller"
	I0916 23:57:25.143288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:57:25.143298       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:57:25.143304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:57:25.144514       1 config.go:309] "Starting node config controller"
	I0916 23:57:25.144540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:57:25.144548       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:57:25.243772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [82a99d0c7744] <==
	E0916 23:58:30.036759       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.036813       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5897933c-61bc-4eef-8922-66c37ba68c57(kube-system/kindnet-rwc59) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	E0916 23:58:30.036834       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	I0916 23:58:30.038109       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.048424       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:30.048665       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4edbf3a1-360c-4f5c-81a3-aa63deb9a159(kube-system/kindnet-lpn5v) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	E0916 23:58:30.048751       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	I0916 23:58:30.051563       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:32.033373       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:32.033442       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b547a423-84fb-45ae-be85-ebd5ae31cede(kube-system/kindnet-wklkh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	E0916 23:58:32.033468       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	I0916 23:58:32.034562       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:34.059741       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	E0916 23:58:34.059840       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ceef8152-3e11-4bf0-99dc-43470c027544(kube-system/kindnet-cdptd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.059869       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.060293       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	E0916 23:58:34.060658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c8813f5f-dfaf-4be1-a0ba-e444bcb2e943(kube-system/kindnet-8t8pb) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	E0916 23:58:34.061375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	I0916 23:58:34.061557       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	I0916 23:58:34.062640       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	I0917 00:01:35.538693       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="ecad6988-1efb-4cc6-8920-902b41d3f3ed" pod="default/busybox-7b57f96db7-kg4q6" assumedNode="ha-198834-m02" currentNode="ha-198834-m03"
	E0917 00:01:35.544474       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m03"
	E0917 00:01:35.546366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ecad6988-1efb-4cc6-8920-902b41d3f3ed(default/busybox-7b57f96db7-kg4q6) was assumed on ha-198834-m03 but assigned to ha-198834-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	E0917 00:01:35.546583       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	I0917 00:01:35.548055       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m02"
	
	
	==> kubelet <==
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349086    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51d39f-7e43-461b-a021-13ddf0cb9845-lib-modules\") pod \"kindnet-h28vp\" (UID: \"6c51d39f-7e43-461b-a021-13ddf0cb9845\") " pod="kube-system/kindnet-h28vp"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349103    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-xtables-lock\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349123    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84n49\" (UniqueName: \"kubernetes.io/projected/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-kube-api-access-84n49\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650251    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-config-volume\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650425    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5ns\" (UniqueName: \"kubernetes.io/projected/c918625f-be11-44bf-8b82-d4c21b8993d1-kube-api-access-th5ns\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650660    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c918625f-be11-44bf-8b82-d4c21b8993d1-config-volume\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650701    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmb4\" (UniqueName: \"kubernetes.io/projected/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-kube-api-access-xhmb4\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.014693    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5tkhn" podStartSLOduration=1.014665687 podStartE2EDuration="1.014665687s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:24.932304069 +0000 UTC m=+6.176281069" watchObservedRunningTime="2025-09-16 23:57:25.014665687 +0000 UTC m=+6.258642688"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.042478    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.046332    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f541f878be89694936d8219d8e7fc682a8a169d9edf6417f067927aa4748c0ae"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153403    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrvp\" (UniqueName: \"kubernetes.io/projected/6b6f64f3-2647-4e13-be41-47fcc6111f3e-kube-api-access-jqrvp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153458    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b6f64f3-2647-4e13-be41-47fcc6111f3e-tmp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098005    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5wx4k" podStartSLOduration=2.097979793 podStartE2EDuration="2.097979793s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.086842117 +0000 UTC m=+7.330819118" watchObservedRunningTime="2025-09-16 23:57:26.097979793 +0000 UTC m=+7.341956793"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098130    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.098124108 podStartE2EDuration="1.098124108s" podCreationTimestamp="2025-09-16 23:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.097817254 +0000 UTC m=+7.341794256" watchObservedRunningTime="2025-09-16 23:57:26.098124108 +0000 UTC m=+7.342101108"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.159968    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mjbz6" podStartSLOduration=5.159946005 podStartE2EDuration="5.159946005s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.124330373 +0000 UTC m=+7.368307374" watchObservedRunningTime="2025-09-16 23:57:29.159946005 +0000 UTC m=+10.403923006"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.193262    2468 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.194144    2468 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 23:57:30 ha-198834 kubelet[2468]: I0916 23:57:30.158085    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h28vp" podStartSLOduration=1.342825895 podStartE2EDuration="6.158061718s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="2025-09-16 23:57:24.955662014 +0000 UTC m=+6.199639012" lastFinishedPulling="2025-09-16 23:57:29.770897851 +0000 UTC m=+11.014874835" observedRunningTime="2025-09-16 23:57:30.157595407 +0000 UTC m=+11.401572408" watchObservedRunningTime="2025-09-16 23:57:30.158061718 +0000 UTC m=+11.402038720"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.230434    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.258365    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370599    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370662    2468 scope.go:117] "RemoveContainer" containerID="fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.388953    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.389033    2468 scope.go:117] "RemoveContainer" containerID="64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea"
	Sep 17 00:01:35 ha-198834 kubelet[2468]: I0917 00:01:35.703764    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5r6\" (UniqueName: \"kubernetes.io/projected/a7cf1231-2a12-4247-a01a-2c2f02f5f2d8-kube-api-access-vt5r6\") pod \"busybox-7b57f96db7-pstjp\" (UID: \"a7cf1231-2a12-4247-a01a-2c2f02f5f2d8\") " pod="default/busybox-7b57f96db7-pstjp"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (15.49s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 node stop m02 --alsologtostderr -v 5: (10.729247278s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (530.518381ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:04:20.936770  752920 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:04:20.937123  752920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:04:20.937137  752920 out.go:374] Setting ErrFile to fd 2...
	I0917 00:04:20.937141  752920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:04:20.937409  752920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:04:20.937600  752920 out.go:368] Setting JSON to false
	I0917 00:04:20.937621  752920 mustload.go:65] Loading cluster: ha-198834
	I0917 00:04:20.937687  752920 notify.go:220] Checking for updates...
	I0917 00:04:20.938005  752920 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:04:20.938028  752920 status.go:174] checking status of ha-198834 ...
	I0917 00:04:20.938485  752920 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:04:20.957215  752920 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:04:20.957272  752920 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:04:20.957573  752920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:04:20.974531  752920 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:04:20.974755  752920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:04:20.974793  752920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:04:20.992549  752920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:04:21.087519  752920 ssh_runner.go:195] Run: systemctl --version
	I0917 00:04:21.092598  752920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:04:21.105649  752920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:04:21.160286  752920 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:false NGoroutines:66 SystemTime:2025-09-17 00:04:21.150597215 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:04:21.160800  752920 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:04:21.160832  752920 api_server.go:166] Checking apiserver status ...
	I0917 00:04:21.160873  752920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:04:21.173623  752920 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:04:21.183656  752920 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:04:21.183722  752920 ssh_runner.go:195] Run: ls
	I0917 00:04:21.187399  752920 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:04:21.193202  752920 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:04:21.193241  752920 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:04:21.193252  752920 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:04:21.193273  752920 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:04:21.193554  752920 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:04:21.211115  752920 status.go:371] ha-198834-m02 host status = "Stopped" (err=<nil>)
	I0917 00:04:21.211139  752920 status.go:384] host is not running, skipping remaining checks
	I0917 00:04:21.211147  752920 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:04:21.211172  752920 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:04:21.211436  752920 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:04:21.228862  752920 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:04:21.228887  752920 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:04:21.229194  752920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:04:21.246638  752920 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:04:21.246925  752920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:04:21.246968  752920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:04:21.263922  752920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:04:21.357404  752920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:04:21.370116  752920 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:04:21.370146  752920 api_server.go:166] Checking apiserver status ...
	I0917 00:04:21.370187  752920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:04:21.381410  752920 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:04:21.391413  752920 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:04:21.391474  752920 ssh_runner.go:195] Run: ls
	I0917 00:04:21.395583  752920 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:04:21.400316  752920 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:04:21.400345  752920 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:04:21.400356  752920 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:04:21.400374  752920 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:04:21.400670  752920 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:04:21.417714  752920 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:04:21.417743  752920 status.go:384] host is not running, skipping remaining checks
	I0917 00:04:21.417752  752920 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5": ha-198834
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-198834-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-198834-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-198834-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5": ha-198834
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-198834-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-198834-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-198834-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:57:02.530585618Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6698b0ad85a9078b37114c4e66646c6dc7a67a706d28557d80b29fea1d15d512",
	            "SandboxKey": "/var/run/docker/netns/6698b0ad85a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:eb:f5:3a:ee:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "669cb4f772890bad35a4ad4cdb1934f42912d7e03fc353fd08c3e3a046cfba54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.074425318s)
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m03.txt │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m03_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt                                                           │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ node    │ ha-198834 node stop m02 --alsologtostderr -v 5                                                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:58.042095  722351 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:58.042245  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042257  722351 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:58.042263  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042455  722351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0916 23:56:58.043028  722351 out.go:368] Setting JSON to false
	I0916 23:56:58.043951  722351 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9550,"bootTime":1758057468,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:58.044043  722351 start.go:140] virtualization: kvm guest
	I0916 23:56:58.045935  722351 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:58.047229  722351 notify.go:220] Checking for updates...
	I0916 23:56:58.047241  722351 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:58.048693  722351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:58.049858  722351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:56:58.051172  722351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:56:58.052335  722351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:58.053390  722351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:58.054603  722351 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:58.077260  722351 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:58.077444  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.132853  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.122248025 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.132998  722351 docker.go:318] overlay module found
	I0916 23:56:58.135611  722351 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:58.136750  722351 start.go:304] selected driver: docker
	I0916 23:56:58.136770  722351 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:58.136782  722351 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:58.137364  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.190249  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.179811473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.190455  722351 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:58.190736  722351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:58.192641  722351 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:58.193978  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:56:58.194069  722351 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:58.194094  722351 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:58.194188  722351 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:58.195605  722351 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0916 23:56:58.196688  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:56:58.197669  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:58.198952  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.199018  722351 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0916 23:56:58.199034  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:58.199064  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:58.199149  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:58.199167  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:56:58.199618  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:56:58.199650  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json: {Name:mkfd30616e0167206552e80675557cfcc4fee172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:58.218451  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:58.218470  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:58.218487  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:58.218525  722351 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:58.218643  722351 start.go:364] duration metric: took 94.227µs to acquireMachinesLock for "ha-198834"
	I0916 23:56:58.218683  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:56:58.218779  722351 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:58.220943  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:58.221292  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:56:58.221335  722351 client.go:168] LocalClient.Create starting
	I0916 23:56:58.221405  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:56:58.221441  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221461  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221543  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:56:58.221570  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221588  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221956  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:58.238665  722351 cli_runner.go:211] docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:58.238743  722351 network_create.go:284] running [docker network inspect ha-198834] to gather additional debugging logs...
	I0916 23:56:58.238769  722351 cli_runner.go:164] Run: docker network inspect ha-198834
	W0916 23:56:58.254999  722351 cli_runner.go:211] docker network inspect ha-198834 returned with exit code 1
	I0916 23:56:58.255086  722351 network_create.go:287] error running [docker network inspect ha-198834]: docker network inspect ha-198834: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-198834 not found
	I0916 23:56:58.255122  722351 network_create.go:289] output of [docker network inspect ha-198834]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-198834 not found
	
	** /stderr **
	I0916 23:56:58.255285  722351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:58.272422  722351 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b56820}
	I0916 23:56:58.272473  722351 network_create.go:124] attempt to create docker network ha-198834 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:58.272524  722351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-198834 ha-198834
	I0916 23:56:58.332062  722351 network_create.go:108] docker network ha-198834 192.168.49.0/24 created
	I0916 23:56:58.332109  722351 kic.go:121] calculated static IP "192.168.49.2" for the "ha-198834" container
	I0916 23:56:58.332180  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:58.347722  722351 cli_runner.go:164] Run: docker volume create ha-198834 --label name.minikube.sigs.k8s.io=ha-198834 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:58.365722  722351 oci.go:103] Successfully created a docker volume ha-198834
	I0916 23:56:58.365811  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --entrypoint /usr/bin/test -v ha-198834:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:58.752716  722351 oci.go:107] Successfully prepared a docker volume ha-198834
	I0916 23:56:58.752766  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.752791  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:58.752860  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:02.431811  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.678879308s)
	I0916 23:57:02.431852  722351 kic.go:203] duration metric: took 3.679056906s to extract preloaded images to volume ...
	W0916 23:57:02.431981  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:02.432030  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:02.432094  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:02.483868  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834 --name ha-198834 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834 --network ha-198834 --ip 192.168.49.2 --volume ha-198834:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:02.749244  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Running}}
	I0916 23:57:02.769059  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:02.787342  722351 cli_runner.go:164] Run: docker exec ha-198834 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:02.836161  722351 oci.go:144] the created container "ha-198834" has a running status.
	I0916 23:57:02.836195  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa...
	I0916 23:57:03.023198  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:03.023332  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:03.051071  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.071057  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:03.071081  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:03.121506  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.138447  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:03.138553  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.156407  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.156657  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.156674  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:03.295893  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.295938  722351 ubuntu.go:182] provisioning hostname "ha-198834"
	I0916 23:57:03.296023  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.314748  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.314993  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.315008  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0916 23:57:03.463642  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.463716  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.480946  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.481224  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.481264  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:03.616528  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:03.616561  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:03.616587  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:03.616603  722351 provision.go:84] configureAuth start
	I0916 23:57:03.616666  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:03.633505  722351 provision.go:143] copyHostCerts
	I0916 23:57:03.633553  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633590  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:03.633601  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633689  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:03.633796  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633824  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:03.633834  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633870  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:03.633969  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.633996  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:03.634007  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.634050  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:03.634188  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0916 23:57:03.786555  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:03.786617  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:03.786691  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.804115  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:03.900955  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:03.901014  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:57:03.928655  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:03.928721  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:03.953468  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:03.953537  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:03.978330  722351 provision.go:87] duration metric: took 361.708211ms to configureAuth
	I0916 23:57:03.978356  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:03.978536  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:03.978599  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.995700  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.995934  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.995954  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:04.131514  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:04.131541  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:04.131675  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:04.131752  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.148752  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.148996  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.149060  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:04.298185  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:04.298270  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.315091  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.315309  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.315326  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:05.420254  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:04.295122578 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:05.420296  722351 machine.go:96] duration metric: took 2.281822221s to provisionDockerMachine
	I0916 23:57:05.420315  722351 client.go:171] duration metric: took 7.198967751s to LocalClient.Create
	I0916 23:57:05.420340  722351 start.go:167] duration metric: took 7.199048943s to libmachine.API.Create "ha-198834"
	I0916 23:57:05.420350  722351 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0916 23:57:05.420364  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:05.420443  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:05.420495  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.437726  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.536164  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:05.539580  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:05.539616  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:05.539633  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:05.539639  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:05.539653  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:05.539713  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:05.539819  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:05.539836  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:05.540001  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:05.548691  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:05.575226  722351 start.go:296] duration metric: took 154.859714ms for postStartSetup
	I0916 23:57:05.575586  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.591876  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:05.592351  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:05.592412  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.609076  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.701881  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:05.706378  722351 start.go:128] duration metric: took 7.487581015s to createHost
	I0916 23:57:05.706400  722351 start.go:83] releasing machines lock for "ha-198834", held for 7.487744986s
	I0916 23:57:05.706457  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.723047  722351 ssh_runner.go:195] Run: cat /version.json
	I0916 23:57:05.723106  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.723117  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:05.723202  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.739830  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.739978  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.900291  722351 ssh_runner.go:195] Run: systemctl --version
	I0916 23:57:05.905029  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:05.909440  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:05.939050  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:05.939153  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:05.968631  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:05.968659  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:05.968693  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:05.968830  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:05.985490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:05.997349  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:06.007949  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:06.008036  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:06.018490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.028804  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:06.039330  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.049816  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:06.059493  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:06.069825  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:06.080461  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:06.091039  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:06.100019  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:06.109126  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.178675  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:06.251706  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:06.251760  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:06.251809  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:06.264383  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.275792  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:06.294666  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.306227  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:06.317564  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:06.334759  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:06.338327  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:06.348543  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:06.366680  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:06.432452  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:06.496386  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:06.496496  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:06.515617  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:06.527317  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.590441  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:07.360810  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:07.372759  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:07.384493  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.396808  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:07.466973  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:07.538629  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.607976  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:07.630119  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:07.642121  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.709050  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:07.784177  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.797686  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:07.797763  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:07.801576  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:07.801630  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:07.804977  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:07.837851  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:07.837957  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.862098  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.888678  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:07.888755  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:07.905526  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:07.909605  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:07.921677  722351 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:57:07.921793  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:07.921842  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.943020  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.943041  722351 docker.go:621] Images already preloaded, skipping extraction
	I0916 23:57:07.943097  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.963583  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.963609  722351 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:57:07.963623  722351 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0916 23:57:07.963750  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
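	[note] The [Unit]/[Service] fragment above is the kubelet drop-in minikube renders; it lands on the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 308-byte scp a few lines below). To inspect the effective unit from inside the node (e.g. via `minikube ssh -p ha-198834`; illustrative):
		sudo systemctl cat kubelet
		cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf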
	I0916 23:57:07.963822  722351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 23:57:08.012977  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:08.013007  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:08.013021  722351 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:57:08.013044  722351 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:57:08.013180  722351 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:57:08.013203  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:08.013244  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:08.026529  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
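	[note] Because `lsmod | grep ip_vs` came back empty, kube-vip is configured without IPVS-based control-plane load-balancing and relies on the ARP-advertised VIP alone (vip_arp=true below). On a host where IPVS is wanted, the modules would be checked and loaded along these lines (illustrative; availability depends on the host kernel, here 6.8.0-1037-gcp):
		lsmod | grep ip_vs || sudo modprobe ip_vs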
	I0916 23:57:08.026652  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
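	[note] The manifest above is written as a static Pod to /etc/kubernetes/manifests/kube-vip.yaml (the 1364-byte scp below), so kubelet launches it without needing the API server. Once the control plane is up, a quick sanity check could be (illustrative):
		kubectl -n kube-system get pods -o wide | grep kube-vip
		ping -c1 192.168.49.254        # the advertised HA VIP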
	I0916 23:57:08.026716  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:08.036301  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:08.036379  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:57:08.046128  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 23:57:08.064738  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:08.083216  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
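	[note] This is the rendered kubeadm config from above being staged as /var/tmp/minikube/kubeadm.yaml.new; it is copied to kubeadm.yaml just before init. If such a file is edited by hand, it can be sanity-checked with the bundled binary (assuming a kubeadm version that provides `kubeadm config validate`, which recent releases do):
		sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new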
	I0916 23:57:08.101114  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:57:08.121332  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:08.125035  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:08.136734  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:08.207460  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:08.231438  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0916 23:57:08.231468  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:08.231491  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.231634  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:08.231682  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:08.231692  722351 certs.go:256] generating profile certs ...
	I0916 23:57:08.231748  722351 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:08.231761  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt with IP's: []
	I0916 23:57:08.595971  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt ...
	I0916 23:57:08.596008  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt: {Name:mk045c8005e18afdd173496398fb640e85421530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596237  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key ...
	I0916 23:57:08.596255  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key: {Name:mkec7f349d5172bad8ab50dce27926cf4a2810b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596372  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28
	I0916 23:57:08.596390  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:57:08.930707  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 ...
	I0916 23:57:08.930740  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28: {Name:mke8743bf1c0faa0b20cb0336c0e1879fcb77e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.930956  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 ...
	I0916 23:57:08.930975  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28: {Name:mkd63d446f2fe51bc154cd1e5df7f39c484f911b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.931094  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:08.931221  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:57:08.931283  722351 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:08.931298  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt with IP's: []
	I0916 23:57:09.286083  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt ...
	I0916 23:57:09.286118  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt: {Name:mk7d8f9e6931aff0b35e5110e6bb582a3f00c824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286322  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key ...
	I0916 23:57:09.286339  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key: {Name:mkaeef389ff7f9a0b6729cce56a45b0b3aa13296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286448  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:09.286467  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:09.286479  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:09.286489  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:09.286513  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:09.286527  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:09.286538  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:09.286550  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:09.286602  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:09.286641  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:09.286650  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:09.286674  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:09.286702  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:09.286730  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:09.286767  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:09.286792  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.286805  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.286817  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.287381  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:09.312982  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:09.337940  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:09.362347  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:09.386557  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:57:09.412140  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:09.436893  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:09.461871  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:09.487876  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:09.516060  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:09.541440  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:09.567069  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:57:09.585649  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:09.591504  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:09.602004  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605727  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605791  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.612679  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:09.622556  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:09.632414  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636379  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636441  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.643659  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:09.653893  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:09.663837  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667554  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667899  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.675833  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
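	[note] The hash/symlink steps above follow OpenSSL's CA lookup convention: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash (b5213941 for minikubeCA.pem, per the symlink above), and /etc/ssl/certs/<hash>.0 must point at the PEM. A manual verification would look like (illustrative):
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem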
	I0916 23:57:09.686032  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:09.689851  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:09.689923  722351 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:09.690062  722351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 23:57:09.708774  722351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:57:09.718368  722351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:57:09.727825  722351 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:57:09.727888  722351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:57:09.738106  722351 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:57:09.738126  722351 kubeadm.go:157] found existing configuration files:
	
	I0916 23:57:09.738165  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:57:09.747962  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:57:09.748017  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:57:09.757385  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:57:09.766772  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:57:09.766839  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:57:09.775735  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.784848  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:57:09.784955  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.793751  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:57:09.803170  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:57:09.803229  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:57:09.811944  722351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:57:09.867145  722351 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:57:09.919246  722351 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:57:19.614241  722351 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:57:19.614308  722351 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:57:19.614466  722351 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:57:19.614561  722351 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:57:19.614607  722351 kubeadm.go:310] OS: Linux
	I0916 23:57:19.614692  722351 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:57:19.614771  722351 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:57:19.614837  722351 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:57:19.614899  722351 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:57:19.614977  722351 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:57:19.615057  722351 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:57:19.615125  722351 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:57:19.615202  722351 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:57:19.615307  722351 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:57:19.615454  722351 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:57:19.615594  722351 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:57:19.615688  722351 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:57:19.618162  722351 out.go:252]   - Generating certificates and keys ...
	I0916 23:57:19.618260  722351 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:57:19.618349  722351 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:57:19.618445  722351 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:57:19.618533  722351 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:57:19.618635  722351 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:57:19.618717  722351 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:57:19.618792  722351 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:57:19.618993  722351 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619071  722351 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:57:19.619249  722351 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619335  722351 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:57:19.619434  722351 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:57:19.619517  722351 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:57:19.619599  722351 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:57:19.619679  722351 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:57:19.619763  722351 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:57:19.619846  722351 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:57:19.619990  722351 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:57:19.620069  722351 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:57:19.620183  722351 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:57:19.620281  722351 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:57:19.621487  722351 out.go:252]   - Booting up control plane ...
	I0916 23:57:19.621595  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:57:19.621704  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:57:19.621799  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:57:19.621956  722351 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:57:19.622047  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:57:19.622137  722351 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:57:19.622213  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:57:19.622246  722351 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:57:19.622371  722351 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:57:19.622503  722351 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:57:19.622564  722351 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000941296s
	I0916 23:57:19.622663  722351 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:57:19.622778  722351 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:57:19.622893  722351 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:57:19.623021  722351 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:57:19.623126  722351 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.545161134s
	I0916 23:57:19.623210  722351 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.1638517s
	I0916 23:57:19.623273  722351 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001738286s
	I0916 23:57:19.623369  722351 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:57:19.623478  722351 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:57:19.623551  722351 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:57:19.623792  722351 kubeadm.go:310] [mark-control-plane] Marking the node ha-198834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:57:19.623845  722351 kubeadm.go:310] [bootstrap-token] Using token: wg2on6.splp3qzu9xv61vdp
	I0916 23:57:19.625599  722351 out.go:252]   - Configuring RBAC rules ...
	I0916 23:57:19.625697  722351 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:57:19.625769  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:57:19.625966  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:57:19.626123  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:57:19.626261  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:57:19.626367  722351 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:57:19.626473  722351 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:57:19.626522  722351 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:57:19.626564  722351 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:57:19.626570  722351 kubeadm.go:310] 
	I0916 23:57:19.626631  722351 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:57:19.626643  722351 kubeadm.go:310] 
	I0916 23:57:19.626737  722351 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:57:19.626747  722351 kubeadm.go:310] 
	I0916 23:57:19.626781  722351 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:57:19.626863  722351 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:57:19.626960  722351 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:57:19.626973  722351 kubeadm.go:310] 
	I0916 23:57:19.627050  722351 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:57:19.627058  722351 kubeadm.go:310] 
	I0916 23:57:19.627113  722351 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:57:19.627119  722351 kubeadm.go:310] 
	I0916 23:57:19.627167  722351 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:57:19.627238  722351 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:57:19.627297  722351 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:57:19.627302  722351 kubeadm.go:310] 
	I0916 23:57:19.627381  722351 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:57:19.627449  722351 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:57:19.627454  722351 kubeadm.go:310] 
	I0916 23:57:19.627525  722351 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627618  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e \
	I0916 23:57:19.627647  722351 kubeadm.go:310] 	--control-plane 
	I0916 23:57:19.627653  722351 kubeadm.go:310] 
	I0916 23:57:19.627725  722351 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:57:19.627733  722351 kubeadm.go:310] 
	I0916 23:57:19.627801  722351 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627921  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e 
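	[note] The bootstrap token above carries a 24h TTL (see the InitConfiguration earlier), so an equivalent join command can be regenerated later on this control plane; the CA hash is the standard --discovery-token-ca-cert-hash derivation, applied here to this cluster's certificatesDir (illustrative commands, run on the node with the bundled binaries):
		sudo /var/lib/minikube/binaries/v1.34.0/kubeadm token create --print-join-command
		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'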
	I0916 23:57:19.627933  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:19.627939  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:19.630017  722351 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:57:19.631017  722351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:57:19.635194  722351 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:57:19.635211  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:57:19.655634  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
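	[note] The manifest applied here is the kindnet CNI recommended at cni.go:136 above. A rollout check once the node is Ready could be (illustrative; the DaemonSet name and labels are assumptions about the kindnet addon, not shown in this log):
		kubectl -n kube-system rollout status daemonset kindnet
		kubectl -n kube-system get pods -o wide | grep kindnet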
	I0916 23:57:19.855102  722351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:57:19.855186  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:19.855265  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834 minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=true
	I0916 23:57:19.863538  722351 ops.go:34] apiserver oom_adj: -16
	I0916 23:57:19.931275  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.432025  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.932100  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.432105  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.932376  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.432213  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.931583  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.431392  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.932193  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.431927  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.504799  722351 kubeadm.go:1105] duration metric: took 4.649687278s to wait for elevateKubeSystemPrivileges
	I0916 23:57:24.504835  722351 kubeadm.go:394] duration metric: took 14.81493092s to StartCluster
	I0916 23:57:24.504858  722351 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.504967  722351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:57:24.505808  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.506080  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:57:24.506079  722351 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:24.506102  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.506120  722351 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:57:24.506215  722351 addons.go:69] Setting storage-provisioner=true in profile "ha-198834"
	I0916 23:57:24.506241  722351 addons.go:238] Setting addon storage-provisioner=true in "ha-198834"
	I0916 23:57:24.506236  722351 addons.go:69] Setting default-storageclass=true in profile "ha-198834"
	I0916 23:57:24.506263  722351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-198834"
	I0916 23:57:24.506271  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.506311  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:24.506630  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.506797  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.527476  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:24.528010  722351 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:57:24.528028  722351 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:57:24.528032  722351 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:57:24.528036  722351 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:57:24.528039  722351 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:57:24.528105  722351 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:57:24.528384  722351 addons.go:238] Setting addon default-storageclass=true in "ha-198834"
	I0916 23:57:24.528420  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.528683  722351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:57:24.528891  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.530050  722351 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.530067  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:57:24.530109  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.548463  722351 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.548490  722351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:57:24.548552  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.551711  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.575963  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.622716  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:57:24.680948  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.725959  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.815565  722351 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
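	[note] The sed pipeline at 23:57:24.622716 above splices a hosts stanza into the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to the gateway (the same command also enables the `log` plugin). Reconstructed from those sed expressions (the resulting Corefile is not printed verbatim here), the relevant fragment is roughly:
		        hosts {
		           192.168.49.1 host.minikube.internal
		           fallthrough
		        }
		        forward . /etc/resolv.conf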
	I0916 23:57:25.027949  722351 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:57:25.029176  722351 addons.go:514] duration metric: took 523.059617ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:57:25.029216  722351 start.go:246] waiting for cluster config update ...
	I0916 23:57:25.029233  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:25.030834  722351 out.go:203] 
	I0916 23:57:25.032180  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:25.032246  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.033846  722351 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0916 23:57:25.035651  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:25.036699  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:25.038502  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.038524  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:25.038599  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:25.038624  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:25.038635  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:25.038696  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.064556  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:25.064575  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:25.064593  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:25.064625  722351 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:25.064737  722351 start.go:364] duration metric: took 87.928µs to acquireMachinesLock for "ha-198834-m02"
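The two lines above show the per-profile machines lock being taken with a 500ms retry delay and a 10-minute timeout before the m02 host is created. Minikube's actual lock implementation is not shown in this log; purely as an illustration of the same delay/timeout semantics, a poll-until-deadline lock could look like the sketch below (the durations come from the log, everything else is invented for the example).

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire polls for an exclusive lock file, retrying every `delay` until `timeout`.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to provision the next node ...")
	}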
	I0916 23:57:25.064767  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:25.064852  722351 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:57:25.067030  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:25.067261  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:25.067302  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:25.067392  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:25.067435  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067451  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067520  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:25.067544  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067561  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067817  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:25.087287  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc0008ae780 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:25.087329  722351 kic.go:121] calculated static IP "192.168.49.3" for the "ha-198834-m02" container
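kic.go logs a calculated static IP of 192.168.49.3 for the second node on the existing ha-198834 network (gateway 192.168.49.1). A minimal, illustrative way to derive the next addresses in that subnet is shown below; this is not minikube's code, just the arithmetic the log line implies.

	package main

	import (
		"fmt"
		"net/netip"
	)

	// nthHost returns gateway+offset inside the cluster subnet, so offset 1 is the
	// first node (.2) and offset 2 is the second node (.3), matching the log.
	func nthHost(gateway netip.Addr, offset int) netip.Addr {
		a := gateway
		for i := 0; i < offset; i++ {
			a = a.Next()
		}
		return a
	}

	func main() {
		gw := netip.MustParseAddr("192.168.49.1")
		fmt.Println(nthHost(gw, 1)) // ha-198834     -> 192.168.49.2
		fmt.Println(nthHost(gw, 2)) // ha-198834-m02 -> 192.168.49.3
	}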
	I0916 23:57:25.087390  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:25.104356  722351 cli_runner.go:164] Run: docker volume create ha-198834-m02 --label name.minikube.sigs.k8s.io=ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:25.128318  722351 oci.go:103] Successfully created a docker volume ha-198834-m02
	I0916 23:57:25.128423  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --entrypoint /usr/bin/test -v ha-198834-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:25.555443  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m02
	I0916 23:57:25.555486  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.555507  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:25.555574  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.769985  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.214340138s)
	I0916 23:57:29.770025  722351 kic.go:203] duration metric: took 4.214511914s to extract preloaded images to volume ...
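The preload step above extracts the cached .tar.lz4 image bundle straight into the node's named Docker volume by running tar inside a throwaway kicbase container, so the new node boots with its image store already populated. A hedged sketch of that same pattern via os/exec, with the arguments copied from the logged command and error handling omitted:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const (
			preload = "/home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4"
			kicbase = "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1"
		)

		start := time.Now()
		// Throwaway container: bind-mount the preload read-only, mount the node volume,
		// and let tar unpack into it. --rm discards the container afterwards.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", preload+":/preloaded.tar:ro",
			"-v", "ha-198834-m02:/extractDir",
			kicbase, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		_ = cmd.Run()
		fmt.Printf("extract took %s\n", time.Since(start))
	}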
	W0916 23:57:29.770138  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.770180  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.770230  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.831280  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m02 --name ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m02 --network ha-198834 --ip 192.168.49.3 --volume ha-198834-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:30.118263  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Running}}
	I0916 23:57:30.140753  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.161053  722351 cli_runner.go:164] Run: docker exec ha-198834-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:30.204746  722351 oci.go:144] the created container "ha-198834-m02" has a running status.
	I0916 23:57:30.204782  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa...
	I0916 23:57:30.491277  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:30.491341  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:30.523169  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.546155  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:30.546178  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
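kic.go creates a fresh RSA keypair under machines/ha-198834-m02/ and copies the public half (381 bytes above) into /home/docker/.ssh/authorized_keys inside the container. A self-contained sketch of producing such a pair in Go, assuming golang.org/x/crypto/ssh is available; the file names mirror the log, the rest is illustrative rather than minikube's generator:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Keypair like the machines/<node>/id_rsa pair created above.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)

		// Private key in PEM (id_rsa), kept on the host and used by the SSH client.
		priv := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		_ = os.WriteFile("id_rsa", priv, 0o600)

		// Public key in authorized_keys format (id_rsa.pub), which the log copies into
		// /home/docker/.ssh/authorized_keys inside the container and chowns to docker.
		pub, _ := ssh.NewPublicKey(&key.PublicKey)
		_ = os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644)
	}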
	I0916 23:57:30.603616  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.624695  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.624784  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.648569  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.648946  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.648966  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.800750  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
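provisionDockerMachine reaches the new node over SSH on the host-mapped port (32788 here) with the key generated above, and its first command is a plain hostname check. A minimal illustrative client doing the same with golang.org/x/crypto/ssh; the insecure host-key callback is an assumption that only makes sense for a localhost-mapped test container:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key and port taken from the log lines above.
		pemBytes, _ := os.ReadFile("id_rsa")
		signer, _ := ssh.ParsePrivateKey(pemBytes)

		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, _ := client.NewSession()
		defer session.Close()
		out, _ := session.CombinedOutput("hostname")
		fmt.Printf("remote hostname: %s", out) // expected: ha-198834-m02
	}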
	
	I0916 23:57:30.800784  722351 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0916 23:57:30.800873  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.822237  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.822505  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.822519  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0916 23:57:30.984206  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.984307  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.007082  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.007398  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.007430  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:31.152561  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:31.152598  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:31.152624  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:31.152644  722351 provision.go:84] configureAuth start
	I0916 23:57:31.152709  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:31.171931  722351 provision.go:143] copyHostCerts
	I0916 23:57:31.171978  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172008  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:31.172014  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172081  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:31.172159  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172181  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:31.172185  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172216  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:31.172262  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172279  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:31.172287  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172310  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:31.172361  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0916 23:57:31.314068  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:31.314146  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:31.314208  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.336792  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:31.442195  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:31.442269  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:31.472780  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:31.472841  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:31.499569  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:31.499653  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:31.530277  722351 provision.go:87] duration metric: took 377.61476ms to configureAuth
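configureAuth generated a Docker daemon server certificate whose SANs cover 127.0.0.1, 192.168.49.3, ha-198834-m02, localhost and minikube, signed by the profile CA. A hedged sketch of issuing that kind of SAN-bearing certificate with crypto/x509 follows; the file names and the PKCS#1 CA key encoding are assumptions, and this is not the generator minikube actually uses.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the signing CA (paths are placeholders for the profile's ca.pem/ca-key.pem).
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA key

		// Fresh key for the server certificate.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
			DNSNames:    []string{"ha-198834-m02", "localhost", "minikube"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}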
	I0916 23:57:31.530311  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:31.530528  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:31.530587  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.548573  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.548821  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.548841  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:31.695327  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:31.695357  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:31.695559  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:31.695639  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.715926  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.716269  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.716384  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:31.879960  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:31.880054  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.901465  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.901783  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.901817  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:33.107385  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:31.877658246 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:33.107432  722351 machine.go:96] duration metric: took 2.482713737s to provisionDockerMachine
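The docker.service update above is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against the installed unit, and the daemon is only reloaded, enabled and restarted when the two differ, which is why the SSH output is a unified diff. The same guard, sketched in Go with the paths from the log (illustrative only; the real flow runs the shell one-liner over SSH):

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func main() {
		const unit = "/lib/systemd/system/docker.service"

		// docker.service.new stands in for the unit text printf'd over SSH above.
		rendered, _ := os.ReadFile(unit + ".new")
		current, _ := os.ReadFile(unit)

		// Only swap the file and bounce docker when the content actually changed,
		// mirroring `diff -u ... || { mv ...; systemctl daemon-reload && ... }`.
		if !bytes.Equal(current, rendered) {
			_ = os.WriteFile(unit, rendered, 0o644)
			_ = exec.Command("systemctl", "daemon-reload").Run()
			_ = exec.Command("systemctl", "enable", "docker").Run()
			_ = exec.Command("systemctl", "restart", "docker").Run()
		}
	}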
	I0916 23:57:33.107448  722351 client.go:171] duration metric: took 8.040135103s to LocalClient.Create
	I0916 23:57:33.107471  722351 start.go:167] duration metric: took 8.040214449s to libmachine.API.Create "ha-198834"
	I0916 23:57:33.107480  722351 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0916 23:57:33.107493  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:33.107570  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:33.107624  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.129478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.235200  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:33.239799  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:33.239842  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:33.239854  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:33.239862  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:33.239881  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:33.239961  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:33.240070  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:33.240085  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:33.240211  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:33.252619  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:33.291135  722351 start.go:296] duration metric: took 183.636707ms for postStartSetup
	I0916 23:57:33.291600  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.313645  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:33.314041  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:33.314103  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.337314  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.439716  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:33.445408  722351 start.go:128] duration metric: took 8.380530846s to createHost
	I0916 23:57:33.445437  722351 start.go:83] releasing machines lock for "ha-198834-m02", held for 8.380681461s
	I0916 23:57:33.445500  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.469661  722351 out.go:179] * Found network options:
	I0916 23:57:33.471226  722351 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:33.472373  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:33.472429  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:33.472520  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:33.472550  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:33.472570  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.472621  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.495822  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.496478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.601441  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:33.704002  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:33.704085  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:33.742848  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:33.742881  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:33.742929  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:33.743066  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:33.765394  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:33.781702  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:33.796106  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:33.796186  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:33.811490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.825594  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:33.840006  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.853819  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:33.867424  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:33.882022  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:33.896562  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:33.910813  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:33.923436  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:33.936892  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.033978  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:34.137820  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:34.137955  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:34.138026  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:34.154788  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.170769  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:34.190397  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.207526  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:34.224333  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:34.249827  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:34.255532  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:34.270253  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:34.296311  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:34.391517  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:34.486390  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:34.486452  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:34.512957  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:34.529696  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.623612  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:35.389236  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:35.402665  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:35.418828  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.433733  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:35.524509  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:35.615815  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.688879  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:35.713552  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:35.729264  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.818355  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:35.908063  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.921416  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:35.921483  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:35.925600  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:35.925666  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:35.929510  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:35.970926  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:35.971002  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.001052  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.032731  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:36.033881  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:36.035387  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:36.055948  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:36.061767  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:36.076229  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:57:36.076482  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:36.076794  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:36.099199  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:36.099483  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0916 23:57:36.099498  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:36.099514  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.099667  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:36.099721  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:36.099735  722351 certs.go:256] generating profile certs ...
	I0916 23:57:36.099834  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:36.099867  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4
	I0916 23:57:36.099889  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:36.171638  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 ...
	I0916 23:57:36.171669  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4: {Name:mk274e4893d598b40c8fed777bc1c7c2e951159a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.171866  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 ...
	I0916 23:57:36.171885  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4: {Name:mkf2a66869f0c345fb28cc9925dc0bb02623a928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.172011  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:36.172195  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:57:36.172362  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:36.172381  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:36.172396  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:36.172415  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:36.172438  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:36.172457  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:36.172474  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:36.172493  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:36.172512  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:36.172589  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:36.172634  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:36.172648  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:36.172679  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:36.172703  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:36.172736  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:36.172796  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:36.172840  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.172861  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.172878  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.172963  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:36.194873  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:36.286293  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:36.291948  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:36.308150  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:36.312206  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:36.325598  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:36.329618  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:36.346110  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:36.350017  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:36.365628  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:36.369445  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:36.383675  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:36.387388  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:57:36.403394  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:36.432068  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:36.461592  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:36.491261  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:36.523895  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:36.552719  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:36.580284  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:36.608342  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:36.639670  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:36.672003  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:36.703856  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:36.734275  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:36.755638  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:36.777805  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:36.799338  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:36.821463  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:36.843600  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:57:36.867808  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:36.889233  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:36.896091  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:36.908363  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913145  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913212  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.921857  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:36.934186  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:36.945282  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949180  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949249  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.958068  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:36.970160  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:36.981053  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985350  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985410  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.993828  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
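The three ln -fs commands above do effectively what c_rehash does: each trusted PEM is linked under /etc/ssl/certs by its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0) so TLS clients can look it up by hash. An illustrative sketch of creating one such link:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		// Hash the certificate the same way the log does, then link it by <hash>.0.
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		_ = os.Symlink(pemPath, filepath.Join("/etc/ssl/certs", hash+".0"))
	}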
	I0916 23:57:37.004616  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:37.008764  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:37.008830  722351 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0916 23:57:37.008961  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:37.008998  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:37.009050  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:37.026582  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:37.026656  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
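The generated kube-vip static-pod manifest above pins the VIP 192.168.49.254 on eth0 with leader election across control planes, but leaves load balancing out because the ip_vs module check failed. A tiny sketch of that gate (illustrative; the real decision lives in minikube's kube-vip config generator, which this log only summarizes):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors `sudo sh -c "lsmod | grep ip_vs"` above: a non-zero exit means the
		// ip_vs module is not loaded, so control-plane load balancing stays disabled
		// while the VIP and leader election still work.
		err := exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run()
		fmt.Println("enable control-plane load balancing:", err == nil)
	}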
	I0916 23:57:37.026738  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:37.036867  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:37.036974  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:37.046606  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:57:37.070259  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:37.092325  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:37.116853  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:37.120789  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
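The command above rewrites /etc/hosts idempotently: any stale control-plane.minikube.internal line is dropped before the VIP mapping is appended. A hedged Go sketch of the same pattern, with the path and entry taken from the log and only minimal error handling:

    // hosts_update.go: sketch of the idempotent /etc/hosts rewrite shown above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const hostsFile = "/etc/hosts"
    	const entry = "192.168.49.254\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile(hostsFile)
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any existing mapping before re-adding it, so repeated runs are safe.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("updated", hostsFile)
    }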
	I0916 23:57:37.137396  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:37.223494  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:37.256254  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:37.256574  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:37.256705  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:37.256762  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:37.278264  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:37.435308  722351 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:37.435366  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:54.013635  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (16.578241326s)
	I0916 23:57:54.013701  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:54.233708  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:57:54.308006  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:54.383356  722351 start.go:319] duration metric: took 17.126777498s to joinCluster
	I0916 23:57:54.383433  722351 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:54.383691  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:54.385020  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:54.386187  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:54.491315  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:54.505328  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:54.505398  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:54.505659  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508947  722351 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0916 23:57:56.508979  722351 node_ready.go:38] duration metric: took 2.003299323s for node "ha-198834-m02" to be "Ready" ...
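A hedged sketch of the "wait for node Ready" step using client-go. The kubeconfig path, node name, and 6-minute timeout come from the log above; the polling loop itself is an assumption about how such a wait could be written, not minikube's implementation.

    // wait_node_ready.go: poll the API until the joined node reports Ready.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-198834-m02", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	panic("timed out waiting for node to become Ready")
    }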
	I0916 23:57:56.508998  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:56.509065  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:56.521258  722351 api_server.go:72] duration metric: took 2.137779117s to wait for apiserver process to appear ...
	I0916 23:57:56.521298  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:56.521326  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:56.527086  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:56.528055  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:56.528078  722351 api_server.go:131] duration metric: took 6.77168ms to wait for apiserver health ...
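A minimal sketch of the healthz probe above: GET https://192.168.49.2:8443/healthz and treat an HTTP 200 with body "ok" as healthy. TLS verification is skipped here to keep the sketch self-contained; the real check authenticates with the profile's client certificates and CA bundle.

    // healthz_probe.go: hedged sketch of the apiserver healthz check.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }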
	I0916 23:57:56.528087  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:56.534412  722351 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:56.534478  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.534486  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.534497  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.534503  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.534515  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534524  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.534535  722351 system_pods.go:61] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534541  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.534547  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.534559  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.534564  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.534667  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.534716  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534725  722351 system_pods.go:61] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534731  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.534743  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.534748  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.534753  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.534758  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.534765  722351 system_pods.go:74] duration metric: took 6.672375ms to wait for pod list to return data ...
	I0916 23:57:56.534774  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:56.538351  722351 default_sa.go:45] found service account: "default"
	I0916 23:57:56.538385  722351 default_sa.go:55] duration metric: took 3.603096ms for default service account to be created ...
	I0916 23:57:56.538399  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:56.542274  722351 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:56.542301  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.542307  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.542311  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.542314  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.542321  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542325  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.542330  722351 system_pods.go:89] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542334  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.542338  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.542344  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.542347  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.542351  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.542356  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542367  722351 system_pods.go:89] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542371  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.542375  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.542377  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.542380  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.542384  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.542393  722351 system_pods.go:126] duration metric: took 3.988364ms to wait for k8s-apps to be running ...
	I0916 23:57:56.542403  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:56.542447  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:56.554466  722351 system_svc.go:56] duration metric: took 12.054188ms (WaitForService) to wait for kubelet
	I0916 23:57:56.554496  722351 kubeadm.go:578] duration metric: took 2.171026353s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:56.554519  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:56.557501  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557532  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557552  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557557  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557561  722351 node_conditions.go:105] duration metric: took 3.037317ms to run NodePressure ...
	I0916 23:57:56.557575  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:56.557610  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:56.559549  722351 out.go:203] 
	I0916 23:57:56.561097  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:56.561232  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.562855  722351 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0916 23:57:56.563951  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:56.565051  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:56.566271  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:56.566290  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:56.566373  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:56.566383  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:56.566485  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:56.566581  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.586635  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:56.586656  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:56.586673  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:56.586704  722351 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:56.586811  722351 start.go:364] duration metric: took 87.391µs to acquireMachinesLock for "ha-198834-m03"
	I0916 23:57:56.586843  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:56.587003  722351 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:56.589063  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:56.589158  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:56.589187  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:56.589263  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:56.589299  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589313  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589365  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:56.589385  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589398  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589634  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:56.607248  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc001595440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:56.607297  722351 kic.go:121] calculated static IP "192.168.49.4" for the "ha-198834-m03" container
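The static IP above follows the cluster network's sequential layout (gateway .1, primary node .2, m02 .3, m03 .4). A hedged sketch of that derivation, mirroring the log's result rather than minikube's actual allocator:

    // node_ip.go: derive the Nth node's IP from the network gateway.
    package main

    import (
    	"fmt"
    	"net"
    )

    // nodeIP returns gateway + ordinal; ordinal 1 = primary (.2), 3 = m03 (.4).
    func nodeIP(gateway string, ordinal int) string {
    	ip := net.ParseIP(gateway).To4()
    	ip[3] += byte(ordinal)
    	return ip.String()
    }

    func main() {
    	fmt.Println(nodeIP("192.168.49.1", 3)) // 192.168.49.4 for ha-198834-m03
    }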
	I0916 23:57:56.607371  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:56.624198  722351 cli_runner.go:164] Run: docker volume create ha-198834-m03 --label name.minikube.sigs.k8s.io=ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:56.642183  722351 oci.go:103] Successfully created a docker volume ha-198834-m03
	I0916 23:57:56.642258  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --entrypoint /usr/bin/test -v ha-198834-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:57.021785  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m03
	I0916 23:57:57.021834  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:57.021864  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:57.021952  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:59.672995  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.650992477s)
	I0916 23:57:59.673039  722351 kic.go:203] duration metric: took 2.651177157s to extract preloaded images to volume ...
	W0916 23:57:59.673144  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:59.673190  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:59.673255  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:59.730169  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m03 --name ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m03 --network ha-198834 --ip 192.168.49.4 --volume ha-198834-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:58:00.013728  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Running}}
	I0916 23:58:00.034076  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.054832  722351 cli_runner.go:164] Run: docker exec ha-198834-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:58:00.109517  722351 oci.go:144] the created container "ha-198834-m03" has a running status.
	I0916 23:58:00.109546  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa...
	I0916 23:58:00.621029  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:58:00.621097  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:58:00.651614  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.673435  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:58:00.673460  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:58:00.730412  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.749865  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:58:00.750006  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.771445  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.771738  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.771754  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:58:00.920523  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:00.920553  722351 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0916 23:58:00.920616  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.940561  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.940837  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.940853  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0916 23:58:01.103101  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:01.103204  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:01.125182  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:01.125511  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:01.125543  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:58:01.275155  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:58:01.275201  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:58:01.275231  722351 ubuntu.go:190] setting up certificates
	I0916 23:58:01.275246  722351 provision.go:84] configureAuth start
	I0916 23:58:01.275318  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:01.296305  722351 provision.go:143] copyHostCerts
	I0916 23:58:01.296378  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296426  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:58:01.296439  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296527  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:58:01.296632  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296656  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:58:01.296682  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296726  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:58:01.296788  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296825  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:58:01.296835  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296924  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:58:01.297040  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
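A hedged sketch of generating a machine server certificate with the SANs listed above (127.0.0.1, 192.168.49.4, ha-198834-m03, localhost, minikube). The real flow signs the certificate with the profile's ca.pem/ca-key.pem; this sketch self-signs so it stays self-contained, and the validity period reuses the CertExpiration value from the cluster config.

    // server_cert.go: issue a TLS server cert carrying the node's SANs.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-198834-m03", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	out, _ := os.Create("server.pem")
    	defer out.Close()
    	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }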
	I0916 23:58:02.100987  722351 provision.go:177] copyRemoteCerts
	I0916 23:58:02.101048  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:58:02.101084  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.119475  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:02.218802  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:58:02.218870  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:58:02.251628  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:58:02.251700  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:58:02.279052  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:58:02.279124  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:58:02.305168  722351 provision.go:87] duration metric: took 1.029902032s to configureAuth
	I0916 23:58:02.305208  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:58:02.305440  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:02.305491  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.322139  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.322413  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.322428  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:58:02.459594  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:58:02.459629  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:58:02.459746  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:58:02.459804  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.476657  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.476985  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.477099  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:58:02.633394  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:58:02.633489  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.651145  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.651390  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.651410  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:58:03.800032  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:58:02.631485455 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:58:03.800077  722351 machine.go:96] duration metric: took 3.050188223s to provisionDockerMachine
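The docker.service update above follows a "replace only if changed" pattern: the rendered unit is written to docker.service.new, diffed against the installed unit, and the daemon is restarted only when they differ. A hedged Go sketch of that pattern, using plain byte equality instead of diff(1); paths come from the log:

    // unit_update.go: swap in a new systemd unit only when its content changed.
    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    func main() {
    	current, _ := os.ReadFile("/lib/systemd/system/docker.service")
    	proposed, err := os.ReadFile("/lib/systemd/system/docker.service.new")
    	if err != nil {
    		panic(err)
    	}
    	if bytes.Equal(current, proposed) {
    		return // identical unit: skip the needless docker restart
    	}
    	if err := os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
    		panic(err)
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
    			panic(err)
    		}
    	}
    }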
	I0916 23:58:03.800094  722351 client.go:171] duration metric: took 7.210891992s to LocalClient.Create
	I0916 23:58:03.800121  722351 start.go:167] duration metric: took 7.210962522s to libmachine.API.Create "ha-198834"
	I0916 23:58:03.800131  722351 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0916 23:58:03.800155  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:58:03.800229  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:58:03.800295  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.817949  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:03.918038  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:58:03.922382  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:58:03.922420  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:58:03.922430  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:58:03.922438  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:58:03.922452  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:58:03.922512  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:58:03.922607  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:58:03.922620  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:58:03.922727  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:58:03.932298  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:03.961387  722351 start.go:296] duration metric: took 161.230642ms for postStartSetup
	I0916 23:58:03.961811  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:03.979123  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:58:03.979395  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:58:03.979437  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.997520  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.091253  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:58:04.096537  722351 start.go:128] duration metric: took 7.509514126s to createHost
	I0916 23:58:04.096585  722351 start.go:83] releasing machines lock for "ha-198834-m03", held for 7.509743952s
	I0916 23:58:04.096660  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:04.115702  722351 out.go:179] * Found network options:
	I0916 23:58:04.117029  722351 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:58:04.118232  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118256  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118281  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118299  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:58:04.118395  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:58:04.118441  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.118449  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:58:04.118515  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.136875  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.137594  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.231418  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:58:04.311016  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:58:04.311108  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:58:04.340810  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:58:04.340841  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.340871  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.340997  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.359059  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:58:04.371794  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:58:04.383345  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:58:04.383421  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:58:04.394513  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.405081  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:58:04.415653  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.426510  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:58:04.436405  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:58:04.447135  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:58:04.457926  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:58:04.469563  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:58:04.478599  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:58:04.488307  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:04.557785  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:58:04.636805  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.636855  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.636899  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:58:04.649865  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.662323  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:58:04.680711  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.693319  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:58:04.705665  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.723842  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:58:04.727547  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:58:04.738845  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:58:04.758974  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:58:04.830471  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:58:04.900429  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:58:04.900482  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:58:04.920093  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:58:04.931599  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:05.002855  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:58:05.807532  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:58:05.819728  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:58:05.832303  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:05.844347  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:58:05.916277  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:58:05.988520  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.055206  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:58:06.080490  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:58:06.092817  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.162707  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:58:06.248276  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:06.261931  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:58:06.262000  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:58:06.265868  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:58:06.265941  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:58:06.269385  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:58:06.305058  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:58:06.305139  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.331725  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.358446  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:58:06.359714  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:58:06.360964  722351 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:58:06.362187  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:58:06.379025  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:58:06.383173  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:06.394963  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:58:06.395208  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:06.395415  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:58:06.412700  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:06.412979  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0916 23:58:06.412992  722351 certs.go:194] generating shared ca certs ...
	I0916 23:58:06.413008  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:06.413150  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:58:06.413202  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:58:06.413213  722351 certs.go:256] generating profile certs ...
	I0916 23:58:06.413290  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:58:06.413316  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0916 23:58:06.413331  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:58:07.059616  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 ...
	I0916 23:58:07.059648  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783: {Name:mka6f3e20ae0db98330bce12c7c53c8ceb029f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.059850  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 ...
	I0916 23:58:07.059873  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783: {Name:mk88fba5116449476945068bb066a5fae095ca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.060019  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:58:07.060173  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
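The apiserver profile cert generated above is issued for the full set of control-plane addresses, including the kube-vip VIP 192.168.49.254. A minimal, self-contained Go sketch of issuing a cert with IP SANs like those; it is self-signed for brevity and only illustrates the IPAddresses field, it is not minikube's crypto.go:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs matching the list logged above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
    			net.ParseIP("192.168.49.4"), net.ParseIP("192.168.49.254"),
    		},
    	}
    	// Self-signed here; the real apiserver cert is signed by the shared minikube CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

The SANs of the resulting certificate can be inspected with openssl x509 -noout -text.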
	I0916 23:58:07.060303  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:58:07.060320  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:58:07.060332  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:58:07.060346  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:58:07.060359  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:58:07.060371  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:58:07.060383  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:58:07.060395  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:58:07.060407  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:58:07.060462  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:58:07.060492  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:58:07.060502  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:58:07.060525  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:58:07.060546  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:58:07.060571  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:58:07.060609  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:07.060634  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.060648  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.060666  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.060725  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:07.077675  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:07.167227  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:58:07.171339  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:58:07.184631  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:58:07.188345  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:58:07.201195  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:58:07.204727  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:58:07.217344  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:58:07.220977  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:58:07.233804  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:58:07.237296  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:58:07.250936  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:58:07.254504  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:58:07.267513  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:58:07.293250  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:58:07.319357  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:58:07.345045  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:58:07.370793  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:58:07.397411  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:58:07.422329  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:58:07.447186  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:58:07.472564  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:58:07.500373  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:58:07.526598  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:58:07.552426  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:58:07.570062  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:58:07.589628  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:58:07.609486  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:58:07.630629  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:58:07.650280  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:58:07.669308  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:58:07.687700  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:58:07.694681  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:58:07.705784  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709662  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709739  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.716649  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:58:07.726290  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:58:07.736118  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740041  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740101  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.747081  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:58:07.757480  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:58:07.767310  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771054  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771114  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.778013  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:58:07.788245  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:58:07.792058  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:58:07.792123  722351 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0916 23:58:07.792232  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:58:07.792263  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:58:07.792307  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:58:07.805180  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
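kube-vip's control-plane load balancing is only enabled when the ip_vs kernel modules are loaded; lsmod reads /proc/modules, so grep exiting with status 1 above simply means no ip_vs entry was found and the VIP falls back to ARP-only mode. A minimal sketch of the same check in Go (the helper name is illustrative):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // hasIPVS reports whether any ip_vs* module appears in /proc/modules,
    // which is the data lsmod prints.
    func hasIPVS() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if strings.HasPrefix(sc.Text(), "ip_vs") {
    			return true, nil
    		}
    	}
    	return false, sc.Err()
    }

    func main() {
    	ok, err := hasIPVS()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("ip_vs loaded:", ok)
    }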
	I0916 23:58:07.805247  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 23:58:07.805296  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:58:07.814610  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:58:07.814678  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:58:07.825352  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:58:07.844047  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:58:07.862757  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:58:07.883848  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:58:07.887562  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:07.899646  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:07.974384  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:08.004718  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:08.005001  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:08.005124  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:58:08.005169  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:08.024622  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:08.169785  722351 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:08.169853  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:58:25.708852  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (17.538975369s)
	I0916 23:58:25.708884  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:58:25.930343  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m03 minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:58:26.006016  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:58:26.089408  722351 start.go:319] duration metric: took 18.084403561s to joinCluster
	I0916 23:58:26.089494  722351 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:26.089805  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:26.091004  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:58:26.092246  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:26.200675  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:26.214424  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:58:26.214506  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:58:26.214713  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	W0916 23:58:28.218137  722351 node_ready.go:57] node "ha-198834-m03" has "Ready":"False" status (will retry)
	I0916 23:58:29.718579  722351 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0916 23:58:29.718621  722351 node_ready.go:38] duration metric: took 3.503891029s for node "ha-198834-m03" to be "Ready" ...
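node_ready.go above polls the node object until its Ready condition turns True (about 3.5s here for ha-198834-m03, against a 6m budget). A minimal client-go sketch of that kind of wait, assuming a kubeconfig pointed at the cluster; the polling interval and helper name are illustrative, not minikube's implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 2s for up to 6m, matching the wait budget in the log.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-198834-m03", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat errors as transient and keep retrying
    			}
    			return nodeReady(n), nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }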
	I0916 23:58:29.718640  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:58:29.718688  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:58:29.730821  722351 api_server.go:72] duration metric: took 3.641289304s to wait for apiserver process to appear ...
	I0916 23:58:29.730847  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:58:29.730870  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:58:29.736447  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:58:29.737363  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:58:29.737382  722351 api_server.go:131] duration metric: took 6.528439ms to wait for apiserver health ...
	I0916 23:58:29.737390  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:58:29.743125  722351 system_pods.go:59] 27 kube-system pods found
	I0916 23:58:29.743154  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.743159  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.743162  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.743166  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.743169  722351 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.743172  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.743179  722351 system_pods.go:61] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743182  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.743189  722351 system_pods.go:61] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743193  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.743198  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.743202  722351 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.743206  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.743209  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.743212  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.743216  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.743220  722351 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743227  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.743231  722351 system_pods.go:61] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743236  722351 system_pods.go:61] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743241  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.743245  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.743248  722351 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.743251  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.743254  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.743257  722351 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.743260  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.743267  722351 system_pods.go:74] duration metric: took 5.871633ms to wait for pod list to return data ...
	I0916 23:58:29.743275  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:58:29.746038  722351 default_sa.go:45] found service account: "default"
	I0916 23:58:29.746059  722351 default_sa.go:55] duration metric: took 2.77496ms for default service account to be created ...
	I0916 23:58:29.746067  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:58:29.751428  722351 system_pods.go:86] 27 kube-system pods found
	I0916 23:58:29.751454  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.751459  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.751463  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.751466  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.751469  722351 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.751472  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.751478  722351 system_pods.go:89] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751482  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.751490  722351 system_pods.go:89] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751494  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.751498  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.751501  722351 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.751504  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.751508  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.751512  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.751515  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.751520  722351 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751526  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.751530  722351 system_pods.go:89] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751535  722351 system_pods.go:89] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751540  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.751545  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.751550  722351 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.751554  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.751558  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.751563  722351 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.751569  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.751577  722351 system_pods.go:126] duration metric: took 5.505301ms to wait for k8s-apps to be running ...
	I0916 23:58:29.751587  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:58:29.751637  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:58:29.764067  722351 system_svc.go:56] duration metric: took 12.467532ms WaitForService to wait for kubelet
	I0916 23:58:29.764102  722351 kubeadm.go:578] duration metric: took 3.674577242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:58:29.764127  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:58:29.767676  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767699  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767712  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767717  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767721  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767724  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767728  722351 node_conditions.go:105] duration metric: took 3.595861ms to run NodePressure ...
	I0916 23:58:29.767739  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:58:29.767761  722351 start.go:255] writing updated cluster config ...
	I0916 23:58:29.768076  722351 ssh_runner.go:195] Run: rm -f paused
	I0916 23:58:29.772054  722351 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:29.772528  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
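The client config above leaves QPS:0 and Burst:0, so client-go falls back to its default client-side rate limiter; that is what produces the "Waited before sending request ... client-side throttling, not priority and fairness" lines below when many small GETs are issued back to back. A minimal sketch of raising those limits on a rest.Config before building the clientset; the values are illustrative, not what the test uses:

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	// With QPS/Burst left at zero, client-go applies its conservative defaults and
    	// delays requests client-side, as seen in the "Waited before sending request" lines.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("clientset ready:", cs != nil)
    }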
	I0916 23:58:29.776391  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781517  722351 pod_ready.go:94] pod "coredns-66bc5c9577-5wx4k" is "Ready"
	I0916 23:58:29.781544  722351 pod_ready.go:86] duration metric: took 5.128752ms for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781552  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.786524  722351 pod_ready.go:94] pod "coredns-66bc5c9577-mjbz6" is "Ready"
	I0916 23:58:29.786549  722351 pod_ready.go:86] duration metric: took 4.991527ms for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.789148  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793593  722351 pod_ready.go:94] pod "etcd-ha-198834" is "Ready"
	I0916 23:58:29.793614  722351 pod_ready.go:86] duration metric: took 4.43654ms for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793622  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797833  722351 pod_ready.go:94] pod "etcd-ha-198834-m02" is "Ready"
	I0916 23:58:29.797856  722351 pod_ready.go:86] duration metric: took 4.228462ms for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797864  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.974055  722351 request.go:683] "Waited before sending request" delay="176.0853ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.173047  722351 request.go:683] "Waited before sending request" delay="193.205885ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.373324  722351 request.go:683] "Waited before sending request" delay="74.260595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.573189  722351 request.go:683] "Waited before sending request" delay="196.187075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.973960  722351 request.go:683] "Waited before sending request" delay="171.749825ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.977519  722351 pod_ready.go:94] pod "etcd-ha-198834-m03" is "Ready"
	I0916 23:58:30.977548  722351 pod_ready.go:86] duration metric: took 1.179678858s for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.172996  722351 request.go:683] "Waited before sending request" delay="195.270589ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:58:31.176896  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.373184  722351 request.go:683] "Waited before sending request" delay="196.155083ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834"
	I0916 23:58:31.573091  722351 request.go:683] "Waited before sending request" delay="196.292532ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:31.576254  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834" is "Ready"
	I0916 23:58:31.576280  722351 pod_ready.go:86] duration metric: took 399.33205ms for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.576288  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.773718  722351 request.go:683] "Waited before sending request" delay="197.34633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m02"
	I0916 23:58:31.973716  722351 request.go:683] "Waited before sending request" delay="196.477986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:31.978504  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m02" is "Ready"
	I0916 23:58:31.978555  722351 pod_ready.go:86] duration metric: took 402.258846ms for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.978567  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.172964  722351 request.go:683] "Waited before sending request" delay="194.26238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m03"
	I0916 23:58:32.373491  722351 request.go:683] "Waited before sending request" delay="197.345263ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:32.376525  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m03" is "Ready"
	I0916 23:58:32.376552  722351 pod_ready.go:86] duration metric: took 397.9768ms for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.573017  722351 request.go:683] "Waited before sending request" delay="196.299414ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:58:32.577487  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.773969  722351 request.go:683] "Waited before sending request" delay="196.341624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834"
	I0916 23:58:32.973585  722351 request.go:683] "Waited before sending request" delay="196.346276ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:32.977689  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834" is "Ready"
	I0916 23:58:32.977721  722351 pod_ready.go:86] duration metric: took 400.206125ms for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.977735  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.173032  722351 request.go:683] "Waited before sending request" delay="195.180271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m02"
	I0916 23:58:33.373811  722351 request.go:683] "Waited before sending request" delay="197.350717ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:33.376722  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m02" is "Ready"
	I0916 23:58:33.376747  722351 pod_ready.go:86] duration metric: took 399.004052ms for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.376756  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.573048  722351 request.go:683] "Waited before sending request" delay="196.186349ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m03"
	I0916 23:58:33.773733  722351 request.go:683] "Waited before sending request" delay="197.347012ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:33.776944  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m03" is "Ready"
	I0916 23:58:33.776972  722351 pod_ready.go:86] duration metric: took 400.209131ms for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.973425  722351 request.go:683] "Waited before sending request" delay="196.344301ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:58:33.977203  722351 pod_ready.go:83] waiting for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.173688  722351 request.go:683] "Waited before sending request" delay="196.345801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tkhn"
	I0916 23:58:34.373026  722351 request.go:683] "Waited before sending request" delay="196.256084ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:34.376079  722351 pod_ready.go:94] pod "kube-proxy-5tkhn" is "Ready"
	I0916 23:58:34.376106  722351 pod_ready.go:86] duration metric: took 398.875647ms for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.376114  722351 pod_ready.go:83] waiting for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.573402  722351 request.go:683] "Waited before sending request" delay="197.174223ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:34.773022  722351 request.go:683] "Waited before sending request" delay="196.289258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:34.973958  722351 request.go:683] "Waited before sending request" delay="97.260541ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:35.173637  722351 request.go:683] "Waited before sending request" delay="196.407064ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.573487  722351 request.go:683] "Waited before sending request" delay="193.254271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.973307  722351 request.go:683] "Waited before sending request" delay="93.259111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	W0916 23:58:36.383328  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:38.882062  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:40.882520  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:42.883194  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:45.382843  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:47.882744  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:49.882993  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:51.883265  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:54.383005  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:56.882555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:59.382463  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:01.382897  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:03.883583  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:06.382581  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:08.882275  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:11.382224  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:13.382333  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:15.882727  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:18.383800  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:20.882547  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:22.883081  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:25.383627  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:27.882377  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:29.882787  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:31.884042  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:34.382932  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:36.882730  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:38.882959  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:40.883411  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:43.382771  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:45.882938  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:48.381607  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:50.382229  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:52.382889  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:54.882546  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:56.882802  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:58.882939  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:00.883550  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:03.382872  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:05.383021  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:07.384166  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:09.883064  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:11.884141  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:14.383248  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:16.883441  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:18.884438  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:21.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:23.883713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:26.383093  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:28.883552  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:31.383392  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:33.883626  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:35.883823  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:38.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:40.883430  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:43.383026  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:45.883091  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:48.382865  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:50.882713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:52.882989  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:55.383076  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:57.383555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:59.882704  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:01.883495  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:04.382406  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:06.383424  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:08.883456  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:11.382988  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:13.882379  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:15.883651  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:18.382551  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:20.382997  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:22.882943  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:24.883256  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:27.383660  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:29.882955  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:32.383364  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	I0917 00:01:34.382530  722351 pod_ready.go:94] pod "kube-proxy-d8brp" is "Ready"
	I0917 00:01:34.382562  722351 pod_ready.go:86] duration metric: took 3m0.006439942s for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.382572  722351 pod_ready.go:83] waiting for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.387645  722351 pod_ready.go:94] pod "kube-proxy-h2fxd" is "Ready"
	I0917 00:01:34.387677  722351 pod_ready.go:86] duration metric: took 5.098826ms for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.390707  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396086  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834" is "Ready"
	I0917 00:01:34.396115  722351 pod_ready.go:86] duration metric: took 5.379692ms for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396126  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400646  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m02" is "Ready"
	I0917 00:01:34.400670  722351 pod_ready.go:86] duration metric: took 4.536355ms for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400680  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.577209  722351 request.go:683] "Waited before sending request" delay="174.117357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0917 00:01:34.580767  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m03" is "Ready"
	I0917 00:01:34.580796  722351 pod_ready.go:86] duration metric: took 180.109317ms for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.580808  722351 pod_ready.go:40] duration metric: took 3m4.808720134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:01:34.629691  722351 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:01:34.631405  722351 out.go:179] * Done! kubectl is now configured to use "ha-198834" cluster and "default" namespace by default
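
	The three-minute stretch of warnings above is minikube's pod_ready poll waiting for "kube-proxy-d8brp" to report Ready. Purely as an illustration (not the test's own code), a comparable readiness check can be written with client-go; the kubeconfig path, namespace, pod name, timeout, and poll interval below are assumptions taken from this log, not values the harness defines.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // podReady mirrors the check behind the pod_ready warnings above:
	    // the pod's Ready condition must be True.
	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        // Assumption: the default kubeconfig points at the ha-198834 cluster.
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(3 * time.Minute) // same budget the log shows (3m0.006s)
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-d8brp", metav1.GetOptions{})
	            if err == nil && podReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            time.Sleep(2 * time.Second) // roughly the interval between the warnings above
	        }
	        fmt.Println("timed out waiting for pod to become Ready")
	    }
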
	
	
	==> Docker <==
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50aecbe9f874a63c5159d55af06211bca7903e623f01f1e603f267caaf6da9a7/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.259744438Z" level=info msg="ignoring event" container=fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.275867775Z" level=info msg="ignoring event" container=64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.320870537Z" level=info msg="ignoring event" container=310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.336829292Z" level=info msg="ignoring event" container=a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687384709Z" level=info msg="ignoring event" container=11889e34950f849cf7805c6d56f1957ad9d5af727f4810f2da728671398b9f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687719889Z" level=info msg="ignoring event" container=1ccdf9f33d5601763297f230a2f6e51620db2ed183e9f4b9179f4ccef579dfac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756623723Z" level=info msg="ignoring event" container=bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756673284Z" level=info msg="ignoring event" container=870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:01:36 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:01:37 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:37Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	43ce744921507       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   2 minutes ago       Running             busybox                   0                   2ab50e090d466       busybox-7b57f96db7-pstjp
	f4f7ea59034e3       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   7b09bb6b1db0b       coredns-66bc5c9577-mjbz6
	9a9eb43950f05       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   bfd4ac8a61c79       coredns-66bc5c9577-5wx4k
	1ccdf9f33d560       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   bf6d6b59f2413       coredns-66bc5c9577-mjbz6
	11889e34950f8       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   870758f308362       coredns-66bc5c9577-5wx4k
	470c5aeb0143c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              6 minutes ago       Running             kindnet-cni               0                   f541f878be896       kindnet-h28vp
	b16ddbbc469c5       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       0                   50aecbe9f874a       storage-provisioner
	2da683f529549       df0860106674d                                                                                         6 minutes ago       Running             kube-proxy                0                   b04f554fbbf03       kube-proxy-5tkhn
	8a32665f7e3e4       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     7 minutes ago       Running             kube-vip                  0                   5e4aed7a38e18       kube-vip-ha-198834
	4f536df8f44eb       a0af72f2ec6d6                                                                                         7 minutes ago       Running             kube-controller-manager   0                   3f97e150fa11b       kube-controller-manager-ha-198834
	ea129c2b5408a       90550c43ad2bc                                                                                         7 minutes ago       Running             kube-apiserver            0                   364803df34eb0       kube-apiserver-ha-198834
	69601afa8d5b0       5f1f5298c888d                                                                                         7 minutes ago       Running             etcd                      0                   d6bbb58cc14ca       etcd-ha-198834
	82a99d0c7744a       46169d968e920                                                                                         7 minutes ago       Running             kube-scheduler            0                   7ffde546949d7       kube-scheduler-ha-198834
	
	
	==> coredns [11889e34950f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50107 - 45856 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000165011s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:50484 - 7509 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000096464s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [1ccdf9f33d56] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49262 - 38359 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000112146s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:51442 - 41164 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000125545s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [9a9eb43950f0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48462 - 46874 "HINFO IN 5273252588524494281.7165436024008789767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039842483s
	[INFO] 10.244.1.2:57104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003506199s
	[INFO] 10.244.1.2:35085 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.022178595s
	[INFO] 10.244.0.4:51301 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001619426s
	[INFO] 10.244.1.2:53849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259853s
	[INFO] 10.244.1.2:45188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162256s
	[INFO] 10.244.1.2:47534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012152721s
	[INFO] 10.244.1.2:52406 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144842s
	[INFO] 10.244.0.4:34463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015354953s
	[INFO] 10.244.0.4:44729 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186473s
	[INFO] 10.244.0.4:49846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119774s
	[INFO] 10.244.0.4:48015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170848s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022712s
	[INFO] 10.244.1.2:41177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176943s
	[INFO] 10.244.1.2:35431 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141918s
	[INFO] 10.244.0.4:42357 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222475s
	[INFO] 10.244.0.4:38639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076388s
	[INFO] 10.244.1.2:48245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137114s
	
	
	==> coredns [f4f7ea59034e] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55965 - 29875 "HINFO IN 6625775143588404920.46653605595863863. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.021009667s
	[INFO] 10.244.1.2:48391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347519s
	[INFO] 10.244.1.2:52968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.006268731s
	[INFO] 10.244.1.2:37064 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.01625905s
	[INFO] 10.244.0.4:38724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015051s
	[INFO] 10.244.0.4:54867 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000812822s
	[INFO] 10.244.0.4:36556 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000823234s
	[INFO] 10.244.0.4:49673 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000092222s
	[INFO] 10.244.1.2:54588 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010881123s
	[INFO] 10.244.1.2:37311 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184317s
	[INFO] 10.244.1.2:34776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214951s
	[INFO] 10.244.1.2:60592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142928s
	[INFO] 10.244.0.4:49014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014497s
	[INFO] 10.244.0.4:49266 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004264282s
	[INFO] 10.244.0.4:42048 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128777s
	[INFO] 10.244.0.4:37542 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071541s
	[INFO] 10.244.1.2:43417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165726s
	[INFO] 10.244.0.4:54211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155228s
	[INFO] 10.244.0.4:54131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161968s
	[INFO] 10.244.1.2:34766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190106s
	[INFO] 10.244.1.2:35363 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179724s
	[INFO] 10.244.1.2:39508 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110509s
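
	The query lines above come from the DNS lookups the busybox test pods issue. As a rough sketch only, the same lookup can be reproduced in Go against the cluster DNS; the service IP 10.96.0.10 and the query name are taken from the log, and the snippet assumes it runs somewhere that IP is routable (for example, inside a pod on this cluster).

	    package main

	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // Assumption: 10.96.0.10 is the cluster DNS service IP, as seen in the
	        // resolv.conf rewrite logged by cri-dockerd above.
	        r := &net.Resolver{
	            PreferGo: true,
	            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
	                d := net.Dialer{Timeout: 2 * time.Second}
	                return d.DialContext(ctx, "udp", "10.96.0.10:53")
	            },
	        }
	        // The coredns logs above answer this query with NOERROR.
	        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println(addrs)
	    }
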
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:04:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 3525bf030f0d49c1ab057441433c477c
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m58s
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m58s
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m4s
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m58s
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m57s  kube-proxy       
	  Normal  Starting                 7m4s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m4s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m4s   kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m4s   kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m4s   kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m59s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           6m30s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           5m59s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:04:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:57:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 35caf7934a824e33949ce426f7316bfd
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m26s
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m29s
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m22s  kube-proxy       
	  Normal  RegisteredNode  6m25s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  6m24s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode  5m59s  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	
	
	Name:               ha-198834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:04:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-198834-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 32c4e7dc065e4fa49595825994457b8e
	  System UUID:                6f810798-3461-44d1-91c3-d55b483ec842
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l2jn5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 etcd-ha-198834-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m53s
	  kube-system                 kindnet-67fn9                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m48s
	  kube-system                 kube-apiserver-ha-198834-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-controller-manager-ha-198834-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-proxy-d8brp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-scheduler-ha-198834-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-vip-ha-198834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        3s     kube-proxy       
	  Normal  RegisteredNode  5m55s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  5m54s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  5m54s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
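
	The Ready conditions reported in the three node descriptions above can also be read programmatically. This is a minimal client-go sketch under the same assumption as before (a default kubeconfig pointing at this cluster); it is not part of the test suite.

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        // Print the same Ready condition shown in the Conditions tables above.
	        for _, n := range nodes.Items {
	            for _, c := range n.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    fmt.Printf("%s\tReady=%s\t%s\n", n.Name, c.Status, c.Reason)
	                }
	            }
	        }
	    }
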
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
	
	
	==> etcd [69601afa8d5b] <==
	{"level":"info","ts":"2025-09-16T23:58:12.736369Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-16T23:58:12.759123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:34222","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:58:12.760774Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(5981864578030751937 12593026477526642892 12956928539845794953)"}
	{"level":"info","ts":"2025-09-16T23:58:12.760967Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:12.761007Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-16T23:58:19.991223Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:25.496900Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:30.072550Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:32.068856Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:40.123997Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-16T23:58:42.678047Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","bytes":1393601,"size":"1.4 MB","took":"30.013494343s"}
	{"level":"info","ts":"2025-09-17T00:03:27.515545Z","caller":"traceutil/trace.go:172","msg":"trace[429348455] transaction","detail":"{read_only:false; response_revision:1816; number_of_response:1; }","duration":"111.335739ms","start":"2025-09-17T00:03:27.404190Z","end":"2025-09-17T00:03:27.515525Z","steps":["trace[429348455] 'process raft request'  (duration: 111.14691ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:03:45.321237Z","caller":"traceutil/trace.go:172","msg":"trace[1168397664] transaction","detail":"{read_only:false; response_revision:1860; number_of_response:1; }","duration":"125.134331ms","start":"2025-09-17T00:03:45.196084Z","end":"2025-09-17T00:03:45.321218Z","steps":["trace[1168397664] 'process raft request'  (duration: 124.989711ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:03:45.959335Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.771431ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040017681051689 > lease_revoke:<id:50c19954f670abb9>","response":"size:29"}
	{"level":"info","ts":"2025-09-17T00:03:45.960220Z","caller":"traceutil/trace.go:172","msg":"trace[1051336348] linearizableReadLoop","detail":"{readStateIndex:2294; appliedIndex:2293; }","duration":"253.51671ms","start":"2025-09-17T00:03:45.706683Z","end":"2025-09-17T00:03:45.960199Z","steps":["trace[1051336348] 'read index received'  (duration: 352.53µs)","trace[1051336348] 'applied index is now lower than readState.Index'  (duration: 253.162091ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:03:45.960342Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"293.914233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:03:45.960374Z","caller":"traceutil/trace.go:172","msg":"trace[305973442] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:1862; }","duration":"293.967568ms","start":"2025-09-17T00:03:45.666397Z","end":"2025-09-17T00:03:45.960365Z","steps":["trace[305973442] 'agreement among raft nodes before linearized reading'  (duration: 293.876046ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:03:45.960547Z","caller":"traceutil/trace.go:172","msg":"trace[2000303218] transaction","detail":"{read_only:false; response_revision:1863; number_of_response:1; }","duration":"248.094618ms","start":"2025-09-17T00:03:45.712439Z","end":"2025-09-17T00:03:45.960534Z","steps":["trace[2000303218] 'process raft request'  (duration: 247.028417ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:04:17.368491Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:04:17.369209Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:04:17.372689Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5303da23f403d0c1","error":"failed to dial 5303da23f403d0c1 on stream Message (EOF)"}
	{"level":"warn","ts":"2025-09-17T00:04:17.591691Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"warn","ts":"2025-09-17T00:04:18.021943Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"warn","ts":"2025-09-17T00:04:18.861357Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:18.861427Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 00:04:22 up  2:46,  0 users,  load average: 2.04, 1.46, 1.16
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [470c5aeb0143] <==
	I0917 00:03:40.423692       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:50.420798       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:03:50.420829       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:03:50.421086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:50.421118       1 main.go:301] handling current node
	I0917 00:03:50.421132       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:03:50.421136       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:04:00.426015       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:04:00.426065       1 main.go:301] handling current node
	I0917 00:04:00.426087       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:04:00.426094       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:04:00.426329       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:04:00.426343       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:04:10.425292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:04:10.425351       1 main.go:301] handling current node
	I0917 00:04:10.425377       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:04:10.425384       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:04:10.425806       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:04:10.425835       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:04:20.419316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:04:20.419345       1 main.go:301] handling current node
	I0917 00:04:20.419360       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:04:20.419368       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:04:20.419604       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:04:20.419620       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
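	kindnet keeps cycling through the three nodes and their Pod CIDRs every ten seconds even while the stopped m02 peer (192.168.49.3) is down, which is expected: it routes by node PodCIDR, not by control-plane health. To cross-check the CIDRs it reports against what the node-ipam controller actually assigned, something like the following would do (illustrative, not part of the test):
	  kubectl --context ha-198834 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR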
	
	
	==> kube-apiserver [ea129c2b5408] <==
	I0916 23:57:24.200277       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:57:24.242655       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0916 23:58:29.048843       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:34.361323       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:36.632983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:02.667929       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:58.976838       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:19.218755       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:15.644338       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:43.338268       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:03:18.851078       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58262: use of closed network connection
	E0917 00:03:19.024113       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58282: use of closed network connection
	E0917 00:03:19.194951       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58306: use of closed network connection
	E0917 00:03:19.388722       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58332: use of closed network connection
	E0917 00:03:19.557698       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58342: use of closed network connection
	E0917 00:03:19.744687       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58348: use of closed network connection
	E0917 00:03:19.919836       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58362: use of closed network connection
	E0917 00:03:20.087518       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58376: use of closed network connection
	E0917 00:03:20.254024       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58398: use of closed network connection
	E0917 00:03:22.459781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48968: use of closed network connection
	E0917 00:03:22.632160       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48992: use of closed network connection
	E0917 00:03:22.799975       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:49024: use of closed network connection
	I0917 00:03:39.352525       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:47.239226       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0917 00:04:17.941970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
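	The final lease.go line records the "kubernetes" Service endpoints being trimmed to [192.168.49.2 192.168.49.4] once the apiserver on 192.168.49.3 (the stopped m02) drops out of the lease reconciler. A quick, illustrative way to observe the same thing from outside:
	  kubectl --context ha-198834 get endpoints kubernetes -o jsonpath='{.subsets[*].addresses[*].ip}'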
	
	
	==> kube-controller-manager [4f536df8f44e] <==
	I0916 23:57:23.340737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0916 23:57:23.340876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:57:23.341125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:57:23.341625       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0916 23:57:23.341694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0916 23:57:23.342559       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0916 23:57:23.344828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:57:23.344975       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:57:23.345054       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:57:23.345095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:57:23.345107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:57:23.345114       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:57:23.346125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:23.351186       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834" podCIDRs=["10.244.0.0/24"]
	I0916 23:57:23.356557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:23.360087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:53.917484       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m02\" does not exist"
	I0916 23:57:53.927329       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:58.295579       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	E0916 23:58:24.690329       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 23:58:24.703047       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:58:25.387067       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m03\" does not exist"
	I0916 23:58:25.397154       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m03" podCIDRs=["10.244.2.0/24"]
	I0916 23:58:28.308323       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	E0917 00:01:35.727697       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [2da683f52954] <==
	I0916 23:57:24.932824       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:57:25.001436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:57:25.102414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:57:25.102449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:57:25.102563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:57:25.131540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:57:25.131604       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:57:25.138482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:57:25.139006       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:57:25.139079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:57:25.143232       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:57:25.143254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:57:25.143282       1 config.go:200] "Starting service config controller"
	I0916 23:57:25.143288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:57:25.143298       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:57:25.143304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:57:25.144514       1 config.go:309] "Starting node config controller"
	I0916 23:57:25.144540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:57:25.144548       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:57:25.243772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
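	The only notable entry here is the advisory startup warning about nodePortAddresses being unset. Its suggested fix maps to the KubeProxyConfiguration fragment below, applied via the kube-proxy ConfigMap in kube-system (config key and exact location are an assumption about a stock kubeadm/minikube setup, shown for illustration only):
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  nodePortAddresses:
	    - primary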
	
	
	==> kube-scheduler [82a99d0c7744] <==
	E0916 23:58:30.036759       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.036813       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5897933c-61bc-4eef-8922-66c37ba68c57(kube-system/kindnet-rwc59) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	E0916 23:58:30.036834       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	I0916 23:58:30.038109       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.048424       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:30.048665       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4edbf3a1-360c-4f5c-81a3-aa63deb9a159(kube-system/kindnet-lpn5v) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	E0916 23:58:30.048751       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	I0916 23:58:30.051563       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:32.033373       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:32.033442       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b547a423-84fb-45ae-be85-ebd5ae31cede(kube-system/kindnet-wklkh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	E0916 23:58:32.033468       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	I0916 23:58:32.034562       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:34.059741       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	E0916 23:58:34.059840       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ceef8152-3e11-4bf0-99dc-43470c027544(kube-system/kindnet-cdptd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.059869       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.060293       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	E0916 23:58:34.060658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c8813f5f-dfaf-4be1-a0ba-e444bcb2e943(kube-system/kindnet-8t8pb) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	E0916 23:58:34.061375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	I0916 23:58:34.061557       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	I0916 23:58:34.062640       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	I0917 00:01:35.538693       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="ecad6988-1efb-4cc6-8920-902b41d3f3ed" pod="default/busybox-7b57f96db7-kg4q6" assumedNode="ha-198834-m02" currentNode="ha-198834-m03"
	E0917 00:01:35.544474       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m03"
	E0917 00:01:35.546366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ecad6988-1efb-4cc6-8920-902b41d3f3ed(default/busybox-7b57f96db7-kg4q6) was assumed on ha-198834-m03 but assigned to ha-198834-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	E0917 00:01:35.546583       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	I0917 00:01:35.548055       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m02"
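	The "Plugin Failed ... already assigned" errors are benign binding races during multi-control-plane bring-up: by the time the bind is attempted the pod already carries a node assignment, so the scheduler logs "Pod has been assigned to node" and aborts the retry rather than re-queueing. Final placement can be confirmed with an ordinary wide listing (illustrative):
	  kubectl --context ha-198834 get pods -A -o wide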
	
	
	==> kubelet <==
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349086    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51d39f-7e43-461b-a021-13ddf0cb9845-lib-modules\") pod \"kindnet-h28vp\" (UID: \"6c51d39f-7e43-461b-a021-13ddf0cb9845\") " pod="kube-system/kindnet-h28vp"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349103    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-xtables-lock\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349123    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84n49\" (UniqueName: \"kubernetes.io/projected/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-kube-api-access-84n49\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650251    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-config-volume\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650425    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5ns\" (UniqueName: \"kubernetes.io/projected/c918625f-be11-44bf-8b82-d4c21b8993d1-kube-api-access-th5ns\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650660    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c918625f-be11-44bf-8b82-d4c21b8993d1-config-volume\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650701    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmb4\" (UniqueName: \"kubernetes.io/projected/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-kube-api-access-xhmb4\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.014693    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5tkhn" podStartSLOduration=1.014665687 podStartE2EDuration="1.014665687s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:24.932304069 +0000 UTC m=+6.176281069" watchObservedRunningTime="2025-09-16 23:57:25.014665687 +0000 UTC m=+6.258642688"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.042478    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.046332    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f541f878be89694936d8219d8e7fc682a8a169d9edf6417f067927aa4748c0ae"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153403    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrvp\" (UniqueName: \"kubernetes.io/projected/6b6f64f3-2647-4e13-be41-47fcc6111f3e-kube-api-access-jqrvp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153458    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b6f64f3-2647-4e13-be41-47fcc6111f3e-tmp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098005    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5wx4k" podStartSLOduration=2.097979793 podStartE2EDuration="2.097979793s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.086842117 +0000 UTC m=+7.330819118" watchObservedRunningTime="2025-09-16 23:57:26.097979793 +0000 UTC m=+7.341956793"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098130    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.098124108 podStartE2EDuration="1.098124108s" podCreationTimestamp="2025-09-16 23:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.097817254 +0000 UTC m=+7.341794256" watchObservedRunningTime="2025-09-16 23:57:26.098124108 +0000 UTC m=+7.342101108"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.159968    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mjbz6" podStartSLOduration=5.159946005 podStartE2EDuration="5.159946005s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.124330373 +0000 UTC m=+7.368307374" watchObservedRunningTime="2025-09-16 23:57:29.159946005 +0000 UTC m=+10.403923006"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.193262    2468 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.194144    2468 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 23:57:30 ha-198834 kubelet[2468]: I0916 23:57:30.158085    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h28vp" podStartSLOduration=1.342825895 podStartE2EDuration="6.158061718s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="2025-09-16 23:57:24.955662014 +0000 UTC m=+6.199639012" lastFinishedPulling="2025-09-16 23:57:29.770897851 +0000 UTC m=+11.014874835" observedRunningTime="2025-09-16 23:57:30.157595407 +0000 UTC m=+11.401572408" watchObservedRunningTime="2025-09-16 23:57:30.158061718 +0000 UTC m=+11.402038720"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.230434    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.258365    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370599    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370662    2468 scope.go:117] "RemoveContainer" containerID="fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.388953    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.389033    2468 scope.go:117] "RemoveContainer" containerID="64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea"
	Sep 17 00:01:35 ha-198834 kubelet[2468]: I0917 00:01:35.703764    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5r6\" (UniqueName: \"kubernetes.io/projected/a7cf1231-2a12-4247-a01a-2c2f02f5f2d8-kube-api-access-vt5r6\") pod \"busybox-7b57f96db7-pstjp\" (UID: \"a7cf1231-2a12-4247-a01a-2c2f02f5f2d8\") " pod="default/busybox-7b57f96db7-pstjp"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (13.23s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (88.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 node start m02 --alsologtostderr -v 5: (36.880904814s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (843.877017ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:05:00.917715  760997 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:05:00.917896  760997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:00.917921  760997 out.go:374] Setting ErrFile to fd 2...
	I0917 00:05:00.917927  760997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:00.918168  760997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:05:00.918373  760997 out.go:368] Setting JSON to false
	I0917 00:05:00.918397  760997 mustload.go:65] Loading cluster: ha-198834
	I0917 00:05:00.918510  760997 notify.go:220] Checking for updates...
	I0917 00:05:00.919008  760997 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:05:00.919042  760997 status.go:174] checking status of ha-198834 ...
	I0917 00:05:00.919595  760997 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:05:00.941709  760997 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:05:00.941746  760997 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:00.942121  760997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:05:00.964716  760997 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:00.964993  760997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:00.965035  760997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:05:00.985303  760997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:05:01.083512  760997 ssh_runner.go:195] Run: systemctl --version
	I0917 00:05:01.089058  760997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:01.104853  760997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:05:01.171284  760997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:05:01.159484696 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:05:01.172274  760997 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:01.172320  760997 api_server.go:166] Checking apiserver status ...
	I0917 00:05:01.172369  760997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:01.188567  760997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:05:01.200553  760997 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:01.200631  760997 ssh_runner.go:195] Run: ls
	I0917 00:05:01.205348  760997 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:01.210569  760997 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:01.210599  760997 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:05:01.210612  760997 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:01.210638  760997 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:05:01.211038  760997 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:05:01.231050  760997 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:05:01.231075  760997 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:01.231405  760997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:05:01.255580  760997 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:01.255931  760997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:01.255986  760997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:05:01.275893  760997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:05:01.384825  760997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:01.398639  760997 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:01.398676  760997 api_server.go:166] Checking apiserver status ...
	I0917 00:05:01.398730  760997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:01.410578  760997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup
	W0917 00:05:01.420989  760997 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:01.421049  760997 ssh_runner.go:195] Run: ls
	I0917 00:05:01.425031  760997 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:01.429333  760997 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:01.429358  760997 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:05:01.429367  760997 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:01.429388  760997 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:05:01.429621  760997 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:05:01.447629  760997 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:05:01.447662  760997 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:01.447991  760997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:05:01.470063  760997 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:01.470441  760997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:01.470498  760997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:05:01.492394  760997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:05:01.594054  760997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:01.623342  760997 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:01.623370  760997 api_server.go:166] Checking apiserver status ...
	I0917 00:05:01.623413  760997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:01.636739  760997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:05:01.652316  760997 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:01.652458  760997 ssh_runner.go:195] Run: ls
	I0917 00:05:01.662271  760997 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:01.668853  760997 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:01.668886  760997 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:05:01.668898  760997 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:01.668936  760997 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:05:01.669288  760997 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:05:01.696955  760997 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:05:01.696983  760997 status.go:384] host is not running, skipping remaining checks
	I0917 00:05:01.696991  760997 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:05:01.703661  665399 retry.go:31] will retry after 913.837654ms: exit status 7
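The exit status 7 here comes solely from ha-198834-m04 reporting host: Stopped; the three control-plane nodes all pass the probes visible in the stderr trace above (docker container inspect -> ssh -> systemctl is-active kubelet -> GET /healthz). The host-state part of that probe can be reproduced by hand, for illustration:
  docker container inspect ha-198834-m04 --format={{.State.Status}}   # typically "exited" for the stopped worker
  docker container inspect ha-198834 --format={{.State.Status}}       # "running" for the primary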
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (753.543149ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:05:02.662575  761336 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:05:02.662676  761336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:02.662684  761336 out.go:374] Setting ErrFile to fd 2...
	I0917 00:05:02.662688  761336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:02.662879  761336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:05:02.663088  761336 out.go:368] Setting JSON to false
	I0917 00:05:02.663112  761336 mustload.go:65] Loading cluster: ha-198834
	I0917 00:05:02.663238  761336 notify.go:220] Checking for updates...
	I0917 00:05:02.663519  761336 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:05:02.663548  761336 status.go:174] checking status of ha-198834 ...
	I0917 00:05:02.664170  761336 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:05:02.684688  761336 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:05:02.684712  761336 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:02.685022  761336 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:05:02.702675  761336 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:02.703034  761336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:02.703104  761336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:05:02.720207  761336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:05:02.817724  761336 ssh_runner.go:195] Run: systemctl --version
	I0917 00:05:02.823440  761336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:02.838464  761336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:05:02.904923  761336 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:05:02.892941574 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:05:02.905681  761336 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:02.905722  761336 api_server.go:166] Checking apiserver status ...
	I0917 00:05:02.905776  761336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:02.920863  761336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:05:02.932802  761336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:02.932872  761336 ssh_runner.go:195] Run: ls
	I0917 00:05:02.937285  761336 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:02.942216  761336 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:02.942251  761336 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:05:02.942267  761336 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:02.942300  761336 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:05:02.942598  761336 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:05:02.962350  761336 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:05:02.962396  761336 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:02.962736  761336 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:05:02.982998  761336 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:02.983342  761336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:02.983401  761336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:05:03.003804  761336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:05:03.101582  761336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:03.114278  761336 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:03.114313  761336 api_server.go:166] Checking apiserver status ...
	I0917 00:05:03.114359  761336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:03.126378  761336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup
	W0917 00:05:03.138060  761336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:03.138145  761336 ssh_runner.go:195] Run: ls
	I0917 00:05:03.142176  761336 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:03.146792  761336 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:03.146823  761336 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:05:03.146836  761336 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:03.146867  761336 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:05:03.147255  761336 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:05:03.167465  761336 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:05:03.167492  761336 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:03.167831  761336 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:05:03.186692  761336 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:03.187089  761336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:03.187163  761336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:05:03.205738  761336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:05:03.301614  761336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:03.315084  761336 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:03.315116  761336 api_server.go:166] Checking apiserver status ...
	I0917 00:05:03.315170  761336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:03.327315  761336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:05:03.338122  761336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:03.338191  761336 ssh_runner.go:195] Run: ls
	I0917 00:05:03.342143  761336 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:03.346258  761336 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:03.346281  761336 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:05:03.346291  761336 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:03.346306  761336 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:05:03.346613  761336 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:05:03.364979  761336 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:05:03.365004  761336 status.go:384] host is not running, skipping remaining checks
	I0917 00:05:03.365013  761336 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:05:03.372072  665399 retry.go:31] will retry after 1.333177088s: exit status 7
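The repeated "unable to find freezer cgroup" warnings in each status pass are expected on a cgroup v2 host: the unified hierarchy has no separate freezer controller, so /proc/<pid>/cgroup contains only a single "0::" entry and the "^[0-9]+:freezer:" grep matches nothing. The check then falls back to the /healthz probe, which is why the apiservers are still reported Running. Two illustrative ways to confirm the cgroup mode on the node:
  stat -fc %T /sys/fs/cgroup    # prints "cgroup2fs" on a cgroup v2 host
  cat /proc/self/cgroup         # a single "0::/..." line under cgroup v2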
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (707.044468ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:05:04.750825  761578 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:05:04.751137  761578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:04.751147  761578 out.go:374] Setting ErrFile to fd 2...
	I0917 00:05:04.751151  761578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:04.751345  761578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:05:04.751523  761578 out.go:368] Setting JSON to false
	I0917 00:05:04.751550  761578 mustload.go:65] Loading cluster: ha-198834
	I0917 00:05:04.751706  761578 notify.go:220] Checking for updates...
	I0917 00:05:04.752004  761578 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:05:04.752033  761578 status.go:174] checking status of ha-198834 ...
	I0917 00:05:04.752480  761578 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:05:04.774095  761578 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:05:04.774132  761578 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:04.774499  761578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:05:04.793073  761578 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:04.793334  761578 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:04.793385  761578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:05:04.811167  761578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:05:04.904406  761578 ssh_runner.go:195] Run: systemctl --version
	I0917 00:05:04.909032  761578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:04.920796  761578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:05:04.972472  761578 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:05:04.96350689 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[N
ame:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:05:04.973054  761578 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:04.973088  761578 api_server.go:166] Checking apiserver status ...
	I0917 00:05:04.973138  761578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:04.985849  761578 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:05:04.996332  761578 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:04.996404  761578 ssh_runner.go:195] Run: ls
	I0917 00:05:05.000249  761578 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:05.006039  761578 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:05.006068  761578 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:05:05.006079  761578 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:05.006094  761578 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:05:05.006330  761578 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:05:05.022324  761578 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:05:05.022351  761578 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:05.022640  761578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:05:05.039544  761578 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:05.039902  761578 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:05.039971  761578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:05:05.056544  761578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:05:05.148992  761578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:05.161017  761578 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:05.161050  761578 api_server.go:166] Checking apiserver status ...
	I0917 00:05:05.161092  761578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:05.172286  761578 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup
	W0917 00:05:05.181811  761578 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:05.181869  761578 ssh_runner.go:195] Run: ls
	I0917 00:05:05.185610  761578 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:05.190097  761578 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:05.190123  761578 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:05:05.190136  761578 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:05.190156  761578 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:05:05.190461  761578 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:05:05.208732  761578 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:05:05.208767  761578 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:05.209089  761578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:05:05.225874  761578 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:05.226158  761578 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:05.226198  761578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:05:05.242702  761578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:05:05.337238  761578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:05.355170  761578 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:05.355208  761578 api_server.go:166] Checking apiserver status ...
	I0917 00:05:05.355268  761578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:05.370509  761578 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:05:05.382816  761578 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:05.382867  761578 ssh_runner.go:195] Run: ls
	I0917 00:05:05.387402  761578 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:05.391926  761578 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:05.391954  761578 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:05:05.391966  761578 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:05.391994  761578 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:05:05.392240  761578 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:05:05.408586  761578 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:05:05.408608  761578 status.go:384] host is not running, skipping remaining checks
	I0917 00:05:05.408615  761578 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:05:05.413978  665399 retry.go:31] will retry after 3.230847369s: exit status 7
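The retry.go lines above trace a simple jittered-backoff loop: re-run the failing status command, wait a growing randomized delay, and give up once an overall deadline passes. A minimal Go sketch of that pattern follows; retryUntil, the delay values, and the fake check standing in for the status command are illustrative assumptions, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil is a hypothetical helper: it re-runs check() until it succeeds
// or the overall timeout is exhausted, sleeping a jittered, growing interval
// between attempts and logging the planned delay like the lines above.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	backoff := time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Jitter the delay so concurrent callers do not retry in lockstep.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		backoff *= 2
	}
}

func main() {
	attempts := 0
	err := retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("exit status 7") // stand-in for the failing status check
		}
		return nil
	})
	fmt.Println("result:", err)
}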
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (703.718243ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:05:08.688932  761845 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:05:08.689211  761845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:08.689221  761845 out.go:374] Setting ErrFile to fd 2...
	I0917 00:05:08.689225  761845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:08.689409  761845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:05:08.689580  761845 out.go:368] Setting JSON to false
	I0917 00:05:08.689602  761845 mustload.go:65] Loading cluster: ha-198834
	I0917 00:05:08.689728  761845 notify.go:220] Checking for updates...
	I0917 00:05:08.690032  761845 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:05:08.690057  761845 status.go:174] checking status of ha-198834 ...
	I0917 00:05:08.690539  761845 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:05:08.710219  761845 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:05:08.710272  761845 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:08.710599  761845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:05:08.727511  761845 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:08.727747  761845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:08.727788  761845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:05:08.745655  761845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:05:08.839309  761845 ssh_runner.go:195] Run: systemctl --version
	I0917 00:05:08.844019  761845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:08.856609  761845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:05:08.910483  761845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:05:08.900603756 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:05:08.911141  761845 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:08.911176  761845 api_server.go:166] Checking apiserver status ...
	I0917 00:05:08.911219  761845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:08.924250  761845 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:05:08.934346  761845 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:08.934405  761845 ssh_runner.go:195] Run: ls
	I0917 00:05:08.938145  761845 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:08.943138  761845 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:08.943171  761845 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:05:08.943181  761845 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:08.943200  761845 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:05:08.943438  761845 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:05:08.960409  761845 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:05:08.960434  761845 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:08.960730  761845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:05:08.977975  761845 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:08.978313  761845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:08.978362  761845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:05:08.996069  761845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:05:09.089310  761845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:09.102429  761845 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:09.102458  761845 api_server.go:166] Checking apiserver status ...
	I0917 00:05:09.102495  761845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:09.113924  761845 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup
	W0917 00:05:09.123782  761845 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:09.123842  761845 ssh_runner.go:195] Run: ls
	I0917 00:05:09.127520  761845 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:09.133812  761845 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:09.133842  761845 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:05:09.133854  761845 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:09.133873  761845 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:05:09.134181  761845 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:05:09.152052  761845 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:05:09.152076  761845 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:09.152330  761845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:05:09.169156  761845 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:09.169416  761845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:09.169461  761845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:05:09.185865  761845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:05:09.279318  761845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:09.291824  761845 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:09.291858  761845 api_server.go:166] Checking apiserver status ...
	I0917 00:05:09.292032  761845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:09.305733  761845 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:05:09.316203  761845 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:09.316261  761845 ssh_runner.go:195] Run: ls
	I0917 00:05:09.320053  761845 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:09.325789  761845 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:09.325814  761845 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:05:09.325823  761845 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:09.325853  761845 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:05:09.326152  761845 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:05:09.344302  761845 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:05:09.344329  761845 status.go:384] host is not running, skipping remaining checks
	I0917 00:05:09.344337  761845 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:05:09.350660  665399 retry.go:31] will retry after 1.743427514s: exit status 7
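Each per-node check in the stderr blocks above ends with an apiserver probe: "Checking apiserver healthz at https://192.168.49.254:8443/healthz" followed by "returned 200: ok". A self-contained Go sketch of such a probe is shown below; the URL is taken from the log, but skipping TLS verification is an assumption made only to keep the example standalone (the real client is built from the cluster's kubeconfig and CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Demo only: a real check should trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", matching the log.
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}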
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (708.783518ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:05:11.140268  762081 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:05:11.140521  762081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:11.140528  762081 out.go:374] Setting ErrFile to fd 2...
	I0917 00:05:11.140532  762081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:11.140728  762081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:05:11.140924  762081 out.go:368] Setting JSON to false
	I0917 00:05:11.140951  762081 mustload.go:65] Loading cluster: ha-198834
	I0917 00:05:11.141081  762081 notify.go:220] Checking for updates...
	I0917 00:05:11.141386  762081 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:05:11.141412  762081 status.go:174] checking status of ha-198834 ...
	I0917 00:05:11.141850  762081 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:05:11.160050  762081 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:05:11.160101  762081 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:11.160493  762081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:05:11.177027  762081 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:11.177295  762081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:11.177353  762081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:05:11.195203  762081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:05:11.288551  762081 ssh_runner.go:195] Run: systemctl --version
	I0917 00:05:11.293590  762081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:11.306659  762081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:05:11.365438  762081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:05:11.354561745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:05:11.365997  762081 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:11.366029  762081 api_server.go:166] Checking apiserver status ...
	I0917 00:05:11.366065  762081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:11.378454  762081 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:05:11.389008  762081 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:11.389079  762081 ssh_runner.go:195] Run: ls
	I0917 00:05:11.393015  762081 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:11.397309  762081 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:11.397335  762081 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:05:11.397346  762081 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:11.397361  762081 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:05:11.397594  762081 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:05:11.414794  762081 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:05:11.414826  762081 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:11.415136  762081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:05:11.432461  762081 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:11.432798  762081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:11.432870  762081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:05:11.450432  762081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:05:11.543598  762081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:11.556670  762081 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:11.556699  762081 api_server.go:166] Checking apiserver status ...
	I0917 00:05:11.556737  762081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:11.568574  762081 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup
	W0917 00:05:11.578991  762081 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:11.579075  762081 ssh_runner.go:195] Run: ls
	I0917 00:05:11.583276  762081 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:11.587871  762081 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:11.587901  762081 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:05:11.587927  762081 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:11.587952  762081 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:05:11.588302  762081 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:05:11.605880  762081 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:05:11.605922  762081 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:11.606185  762081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:05:11.622920  762081 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:11.623240  762081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:11.623299  762081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:05:11.639877  762081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:05:11.732370  762081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:11.745614  762081 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:11.745645  762081 api_server.go:166] Checking apiserver status ...
	I0917 00:05:11.745690  762081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:11.758036  762081 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:05:11.768741  762081 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:11.768806  762081 ssh_runner.go:195] Run: ls
	I0917 00:05:11.772527  762081 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:11.776763  762081 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:11.776786  762081 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:05:11.776795  762081 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:11.776810  762081 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:05:11.777094  762081 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:05:11.798618  762081 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:05:11.798646  762081 status.go:384] host is not running, skipping remaining checks
	I0917 00:05:11.798654  762081 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:05:11.804580  665399 retry.go:31] will retry after 2.568763829s: exit status 7
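The cli_runner lines above determine each node's host state by shelling out to `docker container inspect <name> --format={{.State.Status}}`; ha-198834-m04 is the node reported as stopped. A small Go sketch of that probe, run as an external command, is below; containerState is a hypothetical helper name, and the actual Docker output ("running", "exited", etc.) depends on the host.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to the same inspect command the log shows and
// returns the container's state string.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("ha-198834-m04")
	fmt.Println(state, err) // e.g. "running" or "exited", depending on the host
}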
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (705.0021ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:05:14.420398  762342 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:05:14.420494  762342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:14.420499  762342 out.go:374] Setting ErrFile to fd 2...
	I0917 00:05:14.420504  762342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:14.420754  762342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:05:14.420973  762342 out.go:368] Setting JSON to false
	I0917 00:05:14.420998  762342 mustload.go:65] Loading cluster: ha-198834
	I0917 00:05:14.421068  762342 notify.go:220] Checking for updates...
	I0917 00:05:14.421583  762342 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:05:14.421622  762342 status.go:174] checking status of ha-198834 ...
	I0917 00:05:14.422164  762342 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:05:14.441666  762342 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:05:14.441694  762342 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:14.441974  762342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:05:14.461094  762342 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:14.461349  762342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:14.461387  762342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:05:14.479977  762342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:05:14.574465  762342 ssh_runner.go:195] Run: systemctl --version
	I0917 00:05:14.579036  762342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:14.591161  762342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:05:14.644336  762342 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:05:14.634672961 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:05:14.644843  762342 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:14.644870  762342 api_server.go:166] Checking apiserver status ...
	I0917 00:05:14.644924  762342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:14.657386  762342 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:05:14.667638  762342 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:14.667697  762342 ssh_runner.go:195] Run: ls
	I0917 00:05:14.671481  762342 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:14.675977  762342 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:14.676001  762342 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:05:14.676012  762342 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:14.676029  762342 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:05:14.676310  762342 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:05:14.694067  762342 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:05:14.694103  762342 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:14.694374  762342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:05:14.710075  762342 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:14.710424  762342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:14.710473  762342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:05:14.727252  762342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:05:14.820233  762342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:14.832613  762342 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:14.832640  762342 api_server.go:166] Checking apiserver status ...
	I0917 00:05:14.832677  762342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:14.845432  762342 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup
	W0917 00:05:14.857587  762342 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:14.857635  762342 ssh_runner.go:195] Run: ls
	I0917 00:05:14.861339  762342 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:14.865385  762342 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:14.865415  762342 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:05:14.865426  762342 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:14.865447  762342 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:05:14.865742  762342 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:05:14.883638  762342 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:05:14.883663  762342 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:14.883965  762342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:05:14.901272  762342 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:14.901592  762342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:14.901640  762342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:05:14.918206  762342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:05:15.013139  762342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:15.025600  762342 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:15.025630  762342 api_server.go:166] Checking apiserver status ...
	I0917 00:05:15.025678  762342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:15.037791  762342 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:05:15.048034  762342 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:15.048089  762342 ssh_runner.go:195] Run: ls
	I0917 00:05:15.051865  762342 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:15.056238  762342 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:15.056262  762342 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:05:15.056272  762342 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:15.056286  762342 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:05:15.056517  762342 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:05:15.073793  762342 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:05:15.073817  762342 status.go:384] host is not running, skipping remaining checks
	I0917 00:05:15.073823  762342 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:05:15.079608  665399 retry.go:31] will retry after 8.168962436s: exit status 7
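The "unable to find freezer cgroup" warnings in the stderr blocks come from grepping /proc/<pid>/cgroup for a freezer controller entry: on cgroup v1 hosts that file contains lines like "7:freezer:/...", while on a cgroup v2 (unified) host there is only a single "0::/..." entry, so the egrep exits 1 and the check falls back to probing /healthz, as the log shows. A Go sketch of that lookup follows; freezerPath is an illustrative helper and the pid is a placeholder.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// freezerPath scans /proc/<pid>/cgroup for a "N:freezer:/path" line and
// returns the path if one exists (cgroup v1 only).
func freezerPath(pid int) (string, bool) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", false
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		parts := strings.SplitN(sc.Text(), ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			return parts[2], true
		}
	}
	return "", false
}

func main() {
	path, ok := freezerPath(os.Getpid())
	fmt.Println(path, ok) // ok is false on a cgroup v2-only host
}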
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (715.998346ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:05:23.299054  762698 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:05:23.299365  762698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:23.299379  762698 out.go:374] Setting ErrFile to fd 2...
	I0917 00:05:23.299385  762698 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:23.299601  762698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:05:23.299817  762698 out.go:368] Setting JSON to false
	I0917 00:05:23.299842  762698 mustload.go:65] Loading cluster: ha-198834
	I0917 00:05:23.300008  762698 notify.go:220] Checking for updates...
	I0917 00:05:23.300346  762698 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:05:23.300376  762698 status.go:174] checking status of ha-198834 ...
	I0917 00:05:23.301073  762698 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:05:23.320410  762698 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:05:23.320437  762698 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:23.320720  762698 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:05:23.338195  762698 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:23.338632  762698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:23.338714  762698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:05:23.356249  762698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:05:23.450947  762698 ssh_runner.go:195] Run: systemctl --version
	I0917 00:05:23.455614  762698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:23.468775  762698 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:05:23.524727  762698 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:05:23.514125844 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:05:23.525329  762698 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:23.525361  762698 api_server.go:166] Checking apiserver status ...
	I0917 00:05:23.525400  762698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:23.538580  762698 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:05:23.548724  762698 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:23.548777  762698 ssh_runner.go:195] Run: ls
	I0917 00:05:23.552545  762698 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:23.556936  762698 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:23.556962  762698 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:05:23.556987  762698 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:23.557012  762698 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:05:23.557275  762698 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:05:23.574935  762698 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:05:23.574965  762698 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:23.575213  762698 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:05:23.592733  762698 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:23.593021  762698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:23.593063  762698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:05:23.610991  762698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:05:23.705473  762698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:23.718540  762698 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:23.718569  762698 api_server.go:166] Checking apiserver status ...
	I0917 00:05:23.718606  762698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:23.730457  762698 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup
	W0917 00:05:23.741556  762698 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:23.741618  762698 ssh_runner.go:195] Run: ls
	I0917 00:05:23.745666  762698 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:23.750157  762698 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:23.750193  762698 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:05:23.750203  762698 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:23.750232  762698 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:05:23.750515  762698 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:05:23.768881  762698 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:05:23.768958  762698 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:23.769297  762698 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:05:23.788055  762698 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:23.788392  762698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:23.788461  762698 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:05:23.807052  762698 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:05:23.902437  762698 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:23.915119  762698 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:23.915170  762698 api_server.go:166] Checking apiserver status ...
	I0917 00:05:23.915226  762698 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:23.927205  762698 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:05:23.938387  762698 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:23.938446  762698 ssh_runner.go:195] Run: ls
	I0917 00:05:23.943232  762698 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:23.947613  762698 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:23.947638  762698 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:05:23.947647  762698 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:23.947663  762698 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:05:23.948035  762698 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:05:23.965360  762698 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:05:23.965383  762698 status.go:384] host is not running, skipping remaining checks
	I0917 00:05:23.965391  762698 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:05:23.971965  665399 retry.go:31] will retry after 14.231178435s: exit status 7
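Alongside the container and apiserver checks, each node probe above also runs `sh -c "df -h /var | awk 'NR==2{print $5}'"` over SSH to read the Use% column for /var. The Go sketch below runs the same pipeline locally rather than over SSH, which is an assumption made purely to keep the example self-contained.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// NR==2 selects the data row of df's output; $5 is the "Use%" column.
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		fmt.Println("df failed:", err)
		return
	}
	fmt.Println("/var usage:", strings.TrimSpace(string(out))) // e.g. "23%"
}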
E0917 00:05:36.252081  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (711.376239ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:05:38.251583  763341 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:05:38.251709  763341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:38.251718  763341 out.go:374] Setting ErrFile to fd 2...
	I0917 00:05:38.251722  763341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:38.251940  763341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:05:38.252120  763341 out.go:368] Setting JSON to false
	I0917 00:05:38.252142  763341 mustload.go:65] Loading cluster: ha-198834
	I0917 00:05:38.252277  763341 notify.go:220] Checking for updates...
	I0917 00:05:38.252576  763341 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:05:38.252615  763341 status.go:174] checking status of ha-198834 ...
	I0917 00:05:38.253160  763341 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:05:38.271957  763341 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:05:38.271991  763341 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:38.272317  763341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:05:38.290632  763341 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:38.290982  763341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:38.291047  763341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:05:38.309142  763341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:05:38.404080  763341 ssh_runner.go:195] Run: systemctl --version
	I0917 00:05:38.408984  763341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:38.420855  763341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:05:38.476139  763341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:05:38.466812055 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:05:38.476680  763341 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:38.476717  763341 api_server.go:166] Checking apiserver status ...
	I0917 00:05:38.476766  763341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:38.490606  763341 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:05:38.501501  763341 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:38.501559  763341 ssh_runner.go:195] Run: ls
	I0917 00:05:38.505732  763341 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:38.510161  763341 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:38.510183  763341 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:05:38.510193  763341 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:38.510215  763341 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:05:38.510438  763341 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:05:38.527895  763341 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:05:38.527948  763341 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:38.528346  763341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:05:38.545606  763341 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:38.545938  763341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:38.546006  763341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:05:38.564078  763341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:05:38.657635  763341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:38.670811  763341 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:38.670841  763341 api_server.go:166] Checking apiserver status ...
	I0917 00:05:38.670875  763341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:38.682785  763341 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup
	W0917 00:05:38.693586  763341 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:38.693635  763341 ssh_runner.go:195] Run: ls
	I0917 00:05:38.697361  763341 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:38.701828  763341 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:38.701853  763341 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:05:38.701899  763341 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:38.701956  763341 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:05:38.702271  763341 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:05:38.718613  763341 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:05:38.718636  763341 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:38.718880  763341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:05:38.737017  763341 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:38.737289  763341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:38.737344  763341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:05:38.755092  763341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:05:38.848888  763341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:38.862548  763341 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:38.862577  763341 api_server.go:166] Checking apiserver status ...
	I0917 00:05:38.862610  763341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:38.874900  763341 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:05:38.885096  763341 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:38.885155  763341 ssh_runner.go:195] Run: ls
	I0917 00:05:38.888869  763341 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:38.893365  763341 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:38.893390  763341 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:05:38.893399  763341 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:38.893415  763341 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:05:38.893655  763341 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:05:38.910806  763341 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:05:38.910828  763341 status.go:384] host is not running, skipping remaining checks
	I0917 00:05:38.910835  763341 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:05:38.916143  665399 retry.go:31] will retry after 10.770359924s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (713.165283ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:05:49.731723  763687 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:05:49.732553  763687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:49.732569  763687 out.go:374] Setting ErrFile to fd 2...
	I0917 00:05:49.732576  763687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:05:49.733212  763687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:05:49.733431  763687 out.go:368] Setting JSON to false
	I0917 00:05:49.733452  763687 mustload.go:65] Loading cluster: ha-198834
	I0917 00:05:49.733586  763687 notify.go:220] Checking for updates...
	I0917 00:05:49.733831  763687 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:05:49.733854  763687 status.go:174] checking status of ha-198834 ...
	I0917 00:05:49.734307  763687 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:05:49.755662  763687 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:05:49.755698  763687 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:49.756056  763687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:05:49.774658  763687 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:05:49.774929  763687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:49.774996  763687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:05:49.794885  763687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:05:49.889610  763687 ssh_runner.go:195] Run: systemctl --version
	I0917 00:05:49.894159  763687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:49.905718  763687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:05:49.960113  763687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:05:49.950241521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:05:49.960886  763687 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:49.960943  763687 api_server.go:166] Checking apiserver status ...
	I0917 00:05:49.960990  763687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:49.974372  763687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup
	W0917 00:05:49.984564  763687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2302/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:49.984638  763687 ssh_runner.go:195] Run: ls
	I0917 00:05:49.988698  763687 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:49.995474  763687 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:49.995504  763687 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:05:49.995515  763687 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:49.995533  763687 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:05:49.995770  763687 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:05:50.013759  763687 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:05:50.013787  763687 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:50.014159  763687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:05:50.031483  763687 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:05:50.031722  763687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:50.031759  763687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:05:50.049603  763687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:05:50.144768  763687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:50.158101  763687 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:50.158134  763687 api_server.go:166] Checking apiserver status ...
	I0917 00:05:50.158179  763687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:50.170138  763687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup
	W0917 00:05:50.180319  763687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4762/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:50.180375  763687 ssh_runner.go:195] Run: ls
	I0917 00:05:50.184573  763687 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:50.188859  763687 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:50.188881  763687 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:05:50.188890  763687 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:50.188942  763687 status.go:174] checking status of ha-198834-m03 ...
	I0917 00:05:50.189184  763687 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:05:50.207143  763687 status.go:371] ha-198834-m03 host status = "Running" (err=<nil>)
	I0917 00:05:50.207172  763687 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:50.207665  763687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:05:50.226177  763687 host.go:66] Checking if "ha-198834-m03" exists ...
	I0917 00:05:50.226452  763687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:05:50.226504  763687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:05:50.244327  763687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:05:50.337109  763687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:05:50.349570  763687 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:05:50.349600  763687 api_server.go:166] Checking apiserver status ...
	I0917 00:05:50.349637  763687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:05:50.361985  763687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	W0917 00:05:50.371991  763687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:05:50.372046  763687 ssh_runner.go:195] Run: ls
	I0917 00:05:50.375702  763687 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:05:50.380083  763687 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:05:50.380113  763687 status.go:463] ha-198834-m03 apiserver status = Running (err=<nil>)
	I0917 00:05:50.380125  763687 status.go:176] ha-198834-m03 status: &{Name:ha-198834-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:05:50.380144  763687 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:05:50.380374  763687 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:05:50.397441  763687 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:05:50.397467  763687 status.go:384] host is not running, skipping remaining checks
	I0917 00:05:50.397473  763687 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5" : exit status 7
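The exit status 7 is consistent with the status output above rather than being a separate error: assuming the usual minikube status exit-code flags (1 for a stopped host, 2 for a stopped kubelet, 4 for a stopped apiserver), a node that is fully down, here ha-198834-m04, yields 1+2+4 = 7 even though all three control-plane nodes report Running. The same non-zero exit can be reproduced with the command the test runs:

	# status exits non-zero while any node in the profile is stopped
	out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5; echo "exit=$?"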
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 722917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:57:02.530585618Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6698b0ad85a9078b37114c4e66646c6dc7a67a706d28557d80b29fea1d15d512",
	            "SandboxKey": "/var/run/docker/netns/6698b0ad85a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:eb:f5:3a:ee:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "669cb4f772890bad35a4ad4cdb1934f42912d7e03fc353fd08c3e3a046cfba54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
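The port mappings in the inspect output above are where the status checks get their SSH endpoints: 22/tcp of the ha-198834 container is published on 127.0.0.1:32783, matching the "new ssh client: &{IP:127.0.0.1 Port:32783 ...}" lines earlier in the stderr. The harness extracts that port with the same Go template it logs, which can also be run by hand:

	# prints the host port that backs SSH access to the primary node container
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-198834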
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.012195838s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m03_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt                                                           │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ node    │ ha-198834 node stop m02 --alsologtostderr -v 5                                                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ node    │ ha-198834 node start m02 --alsologtostderr -v 5                                                                                    │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:05 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:56:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:56:58.042095  722351 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:58.042245  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042257  722351 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:58.042263  722351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:58.042455  722351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0916 23:56:58.043028  722351 out.go:368] Setting JSON to false
	I0916 23:56:58.043951  722351 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9550,"bootTime":1758057468,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:58.044043  722351 start.go:140] virtualization: kvm guest
	I0916 23:56:58.045935  722351 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:58.047229  722351 notify.go:220] Checking for updates...
	I0916 23:56:58.047241  722351 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:58.048693  722351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:58.049858  722351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:56:58.051172  722351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:56:58.052335  722351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:58.053390  722351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:58.054603  722351 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:58.077260  722351 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:58.077444  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.132853  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.122248025 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.132998  722351 docker.go:318] overlay module found
	I0916 23:56:58.135611  722351 out.go:179] * Using the docker driver based on user configuration
	I0916 23:56:58.136750  722351 start.go:304] selected driver: docker
	I0916 23:56:58.136770  722351 start.go:918] validating driver "docker" against <nil>
	I0916 23:56:58.136782  722351 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:58.137364  722351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:58.190249  722351 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-09-16 23:56:58.179811473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:58.190455  722351 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:56:58.190736  722351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:56:58.192641  722351 out.go:179] * Using Docker driver with root privileges
	I0916 23:56:58.193978  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:56:58.194069  722351 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 23:56:58.194094  722351 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:56:58.194188  722351 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:58.195605  722351 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0916 23:56:58.196688  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:56:58.197669  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:56:58.198952  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.199018  722351 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0916 23:56:58.199034  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:56:58.199064  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:56:58.199149  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:56:58.199167  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:56:58.199618  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:56:58.199650  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json: {Name:mkfd30616e0167206552e80675557cfcc4fee172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:56:58.218451  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:56:58.218470  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:56:58.218487  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:56:58.218525  722351 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:56:58.218643  722351 start.go:364] duration metric: took 94.227µs to acquireMachinesLock for "ha-198834"
	I0916 23:56:58.218683  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:56:58.218779  722351 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:56:58.220943  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:56:58.221292  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:56:58.221335  722351 client.go:168] LocalClient.Create starting
	I0916 23:56:58.221405  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:56:58.221441  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221461  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221543  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:56:58.221570  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:56:58.221588  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:56:58.221956  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:56:58.238665  722351 cli_runner.go:211] docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:56:58.238743  722351 network_create.go:284] running [docker network inspect ha-198834] to gather additional debugging logs...
	I0916 23:56:58.238769  722351 cli_runner.go:164] Run: docker network inspect ha-198834
	W0916 23:56:58.254999  722351 cli_runner.go:211] docker network inspect ha-198834 returned with exit code 1
	I0916 23:56:58.255086  722351 network_create.go:287] error running [docker network inspect ha-198834]: docker network inspect ha-198834: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-198834 not found
	I0916 23:56:58.255122  722351 network_create.go:289] output of [docker network inspect ha-198834]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-198834 not found
	
	** /stderr **
	I0916 23:56:58.255285  722351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:56:58.272422  722351 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b56820}
	I0916 23:56:58.272473  722351 network_create.go:124] attempt to create docker network ha-198834 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:56:58.272524  722351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-198834 ha-198834
	I0916 23:56:58.332062  722351 network_create.go:108] docker network ha-198834 192.168.49.0/24 created
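	(Note: the network created in the step above can be checked by hand after the run, assuming the ha-198834 network still exists on the host; a minimal sketch, not part of the test itself:

	    # print the subnet and gateway minikube chose for the cluster network
	    docker network inspect ha-198834 --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	    # expected for this run: 192.168.49.0/24 gw 192.168.49.1
	)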
	I0916 23:56:58.332109  722351 kic.go:121] calculated static IP "192.168.49.2" for the "ha-198834" container
	I0916 23:56:58.332180  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:56:58.347722  722351 cli_runner.go:164] Run: docker volume create ha-198834 --label name.minikube.sigs.k8s.io=ha-198834 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:56:58.365722  722351 oci.go:103] Successfully created a docker volume ha-198834
	I0916 23:56:58.365811  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --entrypoint /usr/bin/test -v ha-198834:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:56:58.752716  722351 oci.go:107] Successfully prepared a docker volume ha-198834
	I0916 23:56:58.752766  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:56:58.752791  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:56:58.752860  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:02.431811  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.678879308s)
	I0916 23:57:02.431852  722351 kic.go:203] duration metric: took 3.679056906s to extract preloaded images to volume ...
	W0916 23:57:02.431981  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:02.432030  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:02.432094  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:02.483868  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834 --name ha-198834 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834 --network ha-198834 --ip 192.168.49.2 --volume ha-198834:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:02.749244  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Running}}
	I0916 23:57:02.769059  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:02.787342  722351 cli_runner.go:164] Run: docker exec ha-198834 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:02.836161  722351 oci.go:144] the created container "ha-198834" has a running status.
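	(Note: at this point the node container exists and is running; its state and the host port mapped onto the container's sshd (32783 later in this log) can be read back directly from docker, a sketch:

	    # container state and the dynamically published host port for port 22
	    docker container inspect ha-198834 --format '{{.State.Status}}'
	    docker port ha-198834 22
	)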
	I0916 23:57:02.836195  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa...
	I0916 23:57:03.023198  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:03.023332  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:03.051071  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.071057  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:03.071081  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:03.121506  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:03.138447  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:03.138553  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.156407  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.156657  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.156674  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:03.295893  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.295938  722351 ubuntu.go:182] provisioning hostname "ha-198834"
	I0916 23:57:03.296023  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.314748  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.314993  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.315008  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0916 23:57:03.463642  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0916 23:57:03.463716  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.480946  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.481224  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.481264  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:03.616528  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:03.616561  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:03.616587  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:03.616603  722351 provision.go:84] configureAuth start
	I0916 23:57:03.616666  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:03.633505  722351 provision.go:143] copyHostCerts
	I0916 23:57:03.633553  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633590  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:03.633601  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:03.633689  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:03.633796  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633824  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:03.633834  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:03.633870  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:03.633969  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.633996  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:03.634007  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:03.634050  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:03.634188  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0916 23:57:03.786555  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:03.786617  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:03.786691  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.804115  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:03.900955  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:03.901014  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 23:57:03.928655  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:03.928721  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:03.953468  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:03.953537  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:03.978330  722351 provision.go:87] duration metric: took 361.708211ms to configureAuth
	I0916 23:57:03.978356  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:03.978536  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:03.978599  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:03.995700  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:03.995934  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:03.995954  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:04.131514  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:04.131541  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:04.131675  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:04.131752  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.148752  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.148996  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.149060  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:04.298185  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:04.298270  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:04.315091  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:04.315309  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 23:57:04.315326  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:05.420254  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:04.295122578 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
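	(Note: the command issued above writes docker.service.new, diffs it against the live unit, and only swaps the file and restarts docker when the two differ; the non-empty diff here is why the daemon-reload, enable and restart follow. A minimal standalone sketch of the same swap-if-changed idiom, using a hypothetical my.service name:

	    # replace and restart only when the rendered unit actually changed
	    if ! sudo diff -u /lib/systemd/system/my.service /lib/systemd/system/my.service.new; then
	      sudo mv /lib/systemd/system/my.service.new /lib/systemd/system/my.service
	      sudo systemctl daemon-reload && sudo systemctl enable my.service && sudo systemctl restart my.service
	    fi
	)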
	I0916 23:57:05.420296  722351 machine.go:96] duration metric: took 2.281822221s to provisionDockerMachine
	I0916 23:57:05.420315  722351 client.go:171] duration metric: took 7.198967751s to LocalClient.Create
	I0916 23:57:05.420340  722351 start.go:167] duration metric: took 7.199048943s to libmachine.API.Create "ha-198834"
	I0916 23:57:05.420350  722351 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0916 23:57:05.420364  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:05.420443  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:05.420495  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.437726  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.536164  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:05.539580  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:05.539616  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:05.539633  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:05.539639  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:05.539653  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:05.539713  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:05.539819  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:05.539836  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:05.540001  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:05.548691  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:05.575226  722351 start.go:296] duration metric: took 154.859714ms for postStartSetup
	I0916 23:57:05.575586  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.591876  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:05.592351  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:05.592412  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.609076  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.701881  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:05.706378  722351 start.go:128] duration metric: took 7.487581015s to createHost
	I0916 23:57:05.706400  722351 start.go:83] releasing machines lock for "ha-198834", held for 7.487744986s
	I0916 23:57:05.706457  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0916 23:57:05.723047  722351 ssh_runner.go:195] Run: cat /version.json
	I0916 23:57:05.723106  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.723117  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:05.723202  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:05.739830  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.739978  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:05.900291  722351 ssh_runner.go:195] Run: systemctl --version
	I0916 23:57:05.905029  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:05.909440  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:05.939050  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:05.939153  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:05.968631  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:05.968659  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:05.968693  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:05.968830  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:05.985490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:05.997349  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:06.007949  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:06.008036  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:06.018490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.028804  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:06.039330  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:06.049816  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:06.059493  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:06.069825  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:06.080461  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:06.091039  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:06.100019  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:06.109126  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.178675  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
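	(Note: the sed edits above switch containerd to the systemd cgroup driver, the pause:3.10.1 sandbox image and /etc/cni/net.d before this restart; one way to spot-check the result from the host, assuming the node container is still up, is:

	    # confirm the cgroup driver change landed in the node's containerd config
	    docker exec ha-198834 grep -n SystemdCgroup /etc/containerd/config.toml
	)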
	I0916 23:57:06.251706  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:06.251760  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:06.251809  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:06.264383  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.275792  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:06.294666  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:06.306227  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:06.317564  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:06.334759  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:06.338327  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:06.348543  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:06.366680  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:06.432452  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:06.496386  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:06.496496  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:06.515617  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:06.527317  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:06.590441  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:07.360810  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:07.372759  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:07.384493  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.396808  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:07.466973  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:07.538629  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.607976  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:07.630119  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:07.642121  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:07.709050  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:07.784177  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:07.797686  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:07.797763  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:57:07.801576  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:07.801630  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:07.804977  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:07.837851  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:07.837957  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.862098  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:07.888678  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:07.888755  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:07.905526  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:07.909605  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
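	(Note: the one-liner above pins host.minikube.internal to the network gateway 192.168.49.1 inside the node so workloads can reach the host; it can be verified with a grep in the node container, e.g.:

	    # the gateway alias minikube adds to the node's /etc/hosts
	    docker exec ha-198834 grep host.minikube.internal /etc/hosts
	)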
	I0916 23:57:07.921677  722351 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:57:07.921793  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:07.921842  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.943020  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.943041  722351 docker.go:621] Images already preloaded, skipping extraction
	I0916 23:57:07.943097  722351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 23:57:07.963583  722351 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 23:57:07.963609  722351 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:57:07.963623  722351 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0916 23:57:07.963750  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
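	(Note: the kubelet unit drop-in rendered above is presumably what is later written out as the 308-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf scp'd further down; once the node is running, the effective unit plus drop-ins can be reviewed in place, a sketch:

	    # show the kubelet unit together with the minikube drop-in inside the node
	    docker exec ha-198834 systemctl cat kubelet
	)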
	I0916 23:57:07.963822  722351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 23:57:08.012977  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:08.013007  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:08.013021  722351 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:57:08.013044  722351 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:57:08.013180  722351 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
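	(Note: once kubeadm init has run with the config rendered above, the ClusterConfiguration it actually stored can be compared against this file; a minimal check, assuming the kubeconfig context is named after the ha-198834 profile, is:

	    # kubeadm persists its ClusterConfiguration in the kubeadm-config ConfigMap
	    kubectl --context ha-198834 -n kube-system get configmap kubeadm-config -o yaml
	)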
	I0916 23:57:08.013203  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:08.013244  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:08.026529  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:08.026652  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
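	(Note: since the ip_vs modules were not found, the manifest above runs kube-vip in ARP mode (vip_arp=true) to hold the 192.168.49.254 HA VIP from a static pod on each control plane. A rough reachability check once the first control plane is up, run from inside the node container as a sketch, would be:

	    # the VIP should answer on the apiserver port once kube-vip has claimed it,
	    # even if only with an authorization error for anonymous requests
	    docker exec ha-198834 curl -sk https://192.168.49.254:8443/version
	)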
	I0916 23:57:08.026716  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:08.036301  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:08.036379  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 23:57:08.046128  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0916 23:57:08.064738  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:08.083216  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0916 23:57:08.101114  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 23:57:08.121332  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:08.125035  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:08.136734  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:08.207460  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:08.231438  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0916 23:57:08.231468  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:08.231491  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.231634  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:08.231682  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:08.231692  722351 certs.go:256] generating profile certs ...
	I0916 23:57:08.231748  722351 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:08.231761  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt with IP's: []
	I0916 23:57:08.595971  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt ...
	I0916 23:57:08.596008  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt: {Name:mk045c8005e18afdd173496398fb640e85421530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596237  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key ...
	I0916 23:57:08.596255  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key: {Name:mkec7f349d5172bad8ab50dce27926cf4a2810b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.596372  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28
	I0916 23:57:08.596390  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 23:57:08.930707  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 ...
	I0916 23:57:08.930740  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28: {Name:mke8743bf1c0faa0b20cb0336c0e1879fcb77e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.930956  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 ...
	I0916 23:57:08.930975  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28: {Name:mkd63d446f2fe51bc154cd1e5df7f39c484f911b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:08.931094  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:08.931221  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.c9168e28 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
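	(Note: the apiserver certificate generated above is signed for the service IP, localhost, the node IP and the HA VIP, per the SAN list on the "Generating cert" line; after the run those SANs can be read back from the file with openssl, for example:

	    # list the subject alternative names baked into the generated apiserver cert
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	)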
	I0916 23:57:08.931283  722351 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:57:08.931298  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt with IP's: []
	I0916 23:57:09.286083  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt ...
	I0916 23:57:09.286118  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt: {Name:mk7d8f9e6931aff0b35e5110e6bb582a3f00c824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286322  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key ...
	I0916 23:57:09.286339  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key: {Name:mkaeef389ff7f9a0b6729cce56a45b0b3aa13296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:09.286448  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:09.286467  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:09.286479  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:09.286489  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:09.286513  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:09.286527  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:09.286538  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:09.286550  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:09.286602  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:09.286641  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:09.286650  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:09.286674  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:09.286702  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:09.286730  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:09.286767  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:09.286792  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.286805  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.286817  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.287381  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:09.312982  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:09.337940  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:09.362347  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:09.386557  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 23:57:09.412140  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:09.436893  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:09.461871  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:09.487876  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:09.516060  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:09.541440  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:09.567069  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:57:09.585649  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:09.591504  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:09.602004  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605727  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.605791  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:09.612679  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:09.622556  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:09.632414  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636379  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.636441  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:09.643659  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:09.653893  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:09.663837  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667554  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.667899  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:09.675833  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:09.686032  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:09.689851  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:09.689923  722351 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:09.690062  722351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 23:57:09.708774  722351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:57:09.718368  722351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:57:09.727825  722351 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:57:09.727888  722351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:57:09.738106  722351 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:57:09.738126  722351 kubeadm.go:157] found existing configuration files:
	
	I0916 23:57:09.738165  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:57:09.747962  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:57:09.748017  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:57:09.757385  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:57:09.766772  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:57:09.766839  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:57:09.775735  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.784848  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:57:09.784955  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:57:09.793751  722351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:57:09.803170  722351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:57:09.803229  722351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:57:09.811944  722351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:57:09.867145  722351 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:57:09.919246  722351 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:57:19.614241  722351 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:57:19.614308  722351 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:57:19.614466  722351 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:57:19.614561  722351 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:57:19.614607  722351 kubeadm.go:310] OS: Linux
	I0916 23:57:19.614692  722351 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:57:19.614771  722351 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:57:19.614837  722351 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:57:19.614899  722351 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:57:19.614977  722351 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:57:19.615057  722351 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:57:19.615125  722351 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:57:19.615202  722351 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:57:19.615307  722351 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:57:19.615454  722351 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:57:19.615594  722351 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:57:19.615688  722351 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:57:19.618162  722351 out.go:252]   - Generating certificates and keys ...
	I0916 23:57:19.618260  722351 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:57:19.618349  722351 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:57:19.618445  722351 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:57:19.618533  722351 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:57:19.618635  722351 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:57:19.618717  722351 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:57:19.618792  722351 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:57:19.618993  722351 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619071  722351 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:57:19.619249  722351 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-198834 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:57:19.619335  722351 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:57:19.619434  722351 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:57:19.619517  722351 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:57:19.619599  722351 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:57:19.619679  722351 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:57:19.619763  722351 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:57:19.619846  722351 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:57:19.619990  722351 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:57:19.620069  722351 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:57:19.620183  722351 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:57:19.620281  722351 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:57:19.621487  722351 out.go:252]   - Booting up control plane ...
	I0916 23:57:19.621595  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:57:19.621704  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:57:19.621799  722351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:57:19.621956  722351 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:57:19.622047  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:57:19.622137  722351 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:57:19.622213  722351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:57:19.622246  722351 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:57:19.622371  722351 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:57:19.622503  722351 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:57:19.622564  722351 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000941296s
	I0916 23:57:19.622663  722351 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:57:19.622778  722351 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:57:19.622893  722351 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:57:19.623021  722351 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:57:19.623126  722351 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.545161134s
	I0916 23:57:19.623210  722351 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.1638517s
	I0916 23:57:19.623273  722351 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001738286s
	I0916 23:57:19.623369  722351 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:57:19.623478  722351 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:57:19.623551  722351 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:57:19.623792  722351 kubeadm.go:310] [mark-control-plane] Marking the node ha-198834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:57:19.623845  722351 kubeadm.go:310] [bootstrap-token] Using token: wg2on6.splp3qzu9xv61vdp
	I0916 23:57:19.625599  722351 out.go:252]   - Configuring RBAC rules ...
	I0916 23:57:19.625697  722351 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:57:19.625769  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:57:19.625966  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:57:19.626123  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:57:19.626261  722351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:57:19.626367  722351 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:57:19.626473  722351 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:57:19.626522  722351 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:57:19.626564  722351 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:57:19.626570  722351 kubeadm.go:310] 
	I0916 23:57:19.626631  722351 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:57:19.626643  722351 kubeadm.go:310] 
	I0916 23:57:19.626737  722351 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:57:19.626747  722351 kubeadm.go:310] 
	I0916 23:57:19.626781  722351 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:57:19.626863  722351 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:57:19.626960  722351 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:57:19.626973  722351 kubeadm.go:310] 
	I0916 23:57:19.627050  722351 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:57:19.627058  722351 kubeadm.go:310] 
	I0916 23:57:19.627113  722351 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:57:19.627119  722351 kubeadm.go:310] 
	I0916 23:57:19.627167  722351 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:57:19.627238  722351 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:57:19.627297  722351 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:57:19.627302  722351 kubeadm.go:310] 
	I0916 23:57:19.627381  722351 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:57:19.627449  722351 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:57:19.627454  722351 kubeadm.go:310] 
	I0916 23:57:19.627525  722351 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627618  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e \
	I0916 23:57:19.627647  722351 kubeadm.go:310] 	--control-plane 
	I0916 23:57:19.627653  722351 kubeadm.go:310] 
	I0916 23:57:19.627725  722351 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:57:19.627733  722351 kubeadm.go:310] 
	I0916 23:57:19.627801  722351 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wg2on6.splp3qzu9xv61vdp \
	I0916 23:57:19.627921  722351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e 
	I0916 23:57:19.627933  722351 cni.go:84] Creating CNI manager for ""
	I0916 23:57:19.627939  722351 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 23:57:19.630017  722351 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:57:19.631017  722351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:57:19.635194  722351 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:57:19.635211  722351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:57:19.655634  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:57:19.855102  722351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:57:19.855186  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:19.855265  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834 minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=true
	I0916 23:57:19.863538  722351 ops.go:34] apiserver oom_adj: -16
	I0916 23:57:19.931275  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.432025  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:20.932100  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.432105  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:21.932376  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.432213  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:22.931583  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.431392  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:23.932193  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.431927  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:57:24.504799  722351 kubeadm.go:1105] duration metric: took 4.649687278s to wait for elevateKubeSystemPrivileges
	I0916 23:57:24.504835  722351 kubeadm.go:394] duration metric: took 14.81493092s to StartCluster
	I0916 23:57:24.504858  722351 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.504967  722351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:57:24.505808  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:24.506080  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:57:24.506079  722351 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:24.506102  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:24.506120  722351 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 23:57:24.506215  722351 addons.go:69] Setting storage-provisioner=true in profile "ha-198834"
	I0916 23:57:24.506241  722351 addons.go:238] Setting addon storage-provisioner=true in "ha-198834"
	I0916 23:57:24.506236  722351 addons.go:69] Setting default-storageclass=true in profile "ha-198834"
	I0916 23:57:24.506263  722351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-198834"
	I0916 23:57:24.506271  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.506311  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:24.506630  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.506797  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.527476  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:57:24.528010  722351 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0916 23:57:24.528028  722351 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0916 23:57:24.528032  722351 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 23:57:24.528036  722351 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0916 23:57:24.528039  722351 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 23:57:24.528105  722351 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0916 23:57:24.528384  722351 addons.go:238] Setting addon default-storageclass=true in "ha-198834"
	I0916 23:57:24.528420  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:24.528683  722351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:57:24.528891  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:24.530050  722351 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.530067  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:57:24.530109  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.548463  722351 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.548490  722351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:57:24.548552  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:24.551711  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.575963  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:24.622716  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:57:24.680948  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:57:24.725959  722351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:57:24.815565  722351 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0916 23:57:25.027949  722351 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0916 23:57:25.029176  722351 addons.go:514] duration metric: took 523.059617ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 23:57:25.029216  722351 start.go:246] waiting for cluster config update ...
	I0916 23:57:25.029233  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:25.030834  722351 out.go:203] 
	I0916 23:57:25.032180  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:25.032246  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.033846  722351 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0916 23:57:25.035651  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:25.036699  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:25.038502  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.038524  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:25.038599  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:25.038624  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:25.038635  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:25.038696  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:25.064556  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:25.064575  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:25.064593  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:25.064625  722351 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:25.064737  722351 start.go:364] duration metric: took 87.928µs to acquireMachinesLock for "ha-198834-m02"
	I0916 23:57:25.064767  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:25.064852  722351 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 23:57:25.067030  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:25.067261  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:25.067302  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:25.067392  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:25.067435  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067451  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067520  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:25.067544  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:25.067561  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:25.067817  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:25.087287  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc0008ae780 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:25.087329  722351 kic.go:121] calculated static IP "192.168.49.3" for the "ha-198834-m02" container
	I0916 23:57:25.087390  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:25.104356  722351 cli_runner.go:164] Run: docker volume create ha-198834-m02 --label name.minikube.sigs.k8s.io=ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:25.128318  722351 oci.go:103] Successfully created a docker volume ha-198834-m02
	I0916 23:57:25.128423  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --entrypoint /usr/bin/test -v ha-198834-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:25.555443  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m02
	I0916 23:57:25.555486  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:25.555507  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:25.555574  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:29.769985  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.214340138s)
	I0916 23:57:29.770025  722351 kic.go:203] duration metric: took 4.214511914s to extract preloaded images to volume ...
	W0916 23:57:29.770138  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:29.770180  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:29.770230  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:29.831280  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m02 --name ha-198834-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m02 --network ha-198834 --ip 192.168.49.3 --volume ha-198834-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:57:30.118263  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Running}}
	I0916 23:57:30.140753  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.161053  722351 cli_runner.go:164] Run: docker exec ha-198834-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:57:30.204746  722351 oci.go:144] the created container "ha-198834-m02" has a running status.
	I0916 23:57:30.204782  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa...
	I0916 23:57:30.491277  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:57:30.491341  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:57:30.523169  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.546155  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:57:30.546178  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:57:30.603616  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0916 23:57:30.624695  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:57:30.624784  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.648569  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.648946  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.648966  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:57:30.800750  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.800784  722351 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0916 23:57:30.800873  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:30.822237  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:30.822505  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:30.822519  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0916 23:57:30.984206  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0916 23:57:30.984307  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.007082  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.007398  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.007430  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:57:31.152561  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:57:31.152598  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:57:31.152624  722351 ubuntu.go:190] setting up certificates
	I0916 23:57:31.152644  722351 provision.go:84] configureAuth start
	I0916 23:57:31.152709  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:31.171931  722351 provision.go:143] copyHostCerts
	I0916 23:57:31.171978  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172008  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:57:31.172014  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:57:31.172081  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:57:31.172159  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172181  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:57:31.172185  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:57:31.172216  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:57:31.172262  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172279  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:57:31.172287  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:57:31.172310  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:57:31.172361  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0916 23:57:31.314068  722351 provision.go:177] copyRemoteCerts
	I0916 23:57:31.314146  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:57:31.314208  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.336792  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:31.442195  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:57:31.442269  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:57:31.472780  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:57:31.472841  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:57:31.499569  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:57:31.499653  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 23:57:31.530277  722351 provision.go:87] duration metric: took 377.61476ms to configureAuth
	I0916 23:57:31.530311  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:57:31.530528  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:31.530587  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.548573  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.548821  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.548841  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:57:31.695327  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:57:31.695357  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:57:31.695559  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:57:31.695639  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.715926  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.716269  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.716384  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:57:31.879960  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:57:31.880054  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:31.901465  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:57:31.901783  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 23:57:31.901817  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:57:33.107385  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:57:31.877658246 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:57:33.107432  722351 machine.go:96] duration metric: took 2.482713737s to provisionDockerMachine
	I0916 23:57:33.107448  722351 client.go:171] duration metric: took 8.040135103s to LocalClient.Create
	I0916 23:57:33.107471  722351 start.go:167] duration metric: took 8.040214449s to libmachine.API.Create "ha-198834"
	I0916 23:57:33.107480  722351 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0916 23:57:33.107493  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:57:33.107570  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:57:33.107624  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.129478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.235200  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:57:33.239799  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:57:33.239842  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:57:33.239854  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:57:33.239862  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:57:33.239881  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:57:33.239961  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:57:33.240070  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:57:33.240085  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:57:33.240211  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:57:33.252619  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:33.291135  722351 start.go:296] duration metric: took 183.636707ms for postStartSetup
	I0916 23:57:33.291600  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.313645  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:33.314041  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:57:33.314103  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.337314  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.439716  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:57:33.445408  722351 start.go:128] duration metric: took 8.380530846s to createHost
	I0916 23:57:33.445437  722351 start.go:83] releasing machines lock for "ha-198834-m02", held for 8.380681461s
	I0916 23:57:33.445500  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0916 23:57:33.469661  722351 out.go:179] * Found network options:
	I0916 23:57:33.471226  722351 out.go:179]   - NO_PROXY=192.168.49.2
	W0916 23:57:33.472373  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:57:33.472429  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:57:33.472520  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:57:33.472550  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:57:33.472570  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.472621  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0916 23:57:33.495822  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.496478  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0916 23:57:33.601441  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:57:33.704002  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:57:33.704085  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:57:33.742848  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:57:33.742881  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:33.742929  722351 detect.go:190] detected "systemd" cgroup driver on host os
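The "detected \"systemd\" cgroup driver on host os" decision above drives the containerd and docker configuration that follows. One common heuristic for that decision (not necessarily the exact check used here) is whether the host runs the unified cgroup v2 hierarchy:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // On cgroup v2 hosts the unified hierarchy exposes this file; with
        // systemd as init, the systemd cgroup driver is then the usual choice
        // for both the container runtime and the kubelet.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 detected: prefer the systemd cgroup driver")
        } else {
            fmt.Println("cgroup v1: driver depends on how dockerd/kubelet were configured")
        }
    }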
	I0916 23:57:33.743066  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:33.765394  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:57:33.781702  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:57:33.796106  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:57:33.796186  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:57:33.811490  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.825594  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:57:33.840006  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:57:33.853819  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:57:33.867424  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:57:33.882022  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:57:33.896562  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:57:33.910813  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:57:33.923436  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:57:33.936892  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.033978  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:57:34.137820  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:57:34.137955  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:57:34.138026  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:57:34.154788  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.170769  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:57:34.190397  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:57:34.207526  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:57:34.224333  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:57:34.249827  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:57:34.255532  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:57:34.270253  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:57:34.296311  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:57:34.391517  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:57:34.486390  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:57:34.486452  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:57:34.512957  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:57:34.529696  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:34.623612  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:57:35.389236  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:57:35.402665  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:57:35.418828  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.433733  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:57:35.524509  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:57:35.615815  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.688879  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:57:35.713552  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:57:35.729264  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:35.818355  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:57:35.908063  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:57:35.921416  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:57:35.921483  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
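The 60s wait for /var/run/cri-dockerd.sock announced above is satisfied by the very first stat here. A minimal sketch of that kind of bounded poll, assuming the same socket path:

    package main

    import (
        "log"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/cri-dockerd.sock"
        deadline := time.Now().Add(60 * time.Second)
        for {
            if _, err := os.Stat(sock); err == nil {
                log.Printf("%s is present", sock)
                return
            }
            if time.Now().After(deadline) {
                log.Fatalf("timed out waiting for %s", sock)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }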
	I0916 23:57:35.925600  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:57:35.925666  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:57:35.929510  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:57:35.970926  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:57:35.971002  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.001052  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:57:36.032731  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:57:36.033881  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:57:36.035387  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:36.055948  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:57:36.061767  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:36.076229  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:57:36.076482  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:36.076794  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:57:36.099199  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:36.099483  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0916 23:57:36.099498  722351 certs.go:194] generating shared ca certs ...
	I0916 23:57:36.099514  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.099667  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:57:36.099721  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:57:36.099735  722351 certs.go:256] generating profile certs ...
	I0916 23:57:36.099834  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:57:36.099867  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4
	I0916 23:57:36.099889  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 23:57:36.171638  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 ...
	I0916 23:57:36.171669  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4: {Name:mk274e4893d598b40c8fed777bc1c7c2e951159a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.171866  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 ...
	I0916 23:57:36.171885  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4: {Name:mkf2a66869f0c345fb28cc9925dc0bb02623a928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:57:36.172011  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:57:36.172195  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:57:36.172362  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
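The regenerated apiserver serving certificate above carries IP SANs for the service VIP (10.96.0.1), both control-plane node IPs and the kube-vip address 192.168.49.254, so clients can reach any of those endpoints over TLS. A self-signed sketch of issuing a certificate with that SAN list (the real certificate is signed by the cluster's minikubeCA key pair, not self-signed):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "kube-apiserver"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs mirrored from the crypto.go line in the log above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"), net.ParseIP("192.168.49.254"),
            },
        }
        // Self-signed for brevity; a CA-signed cert would pass the CA template and key here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }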
	I0916 23:57:36.172381  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:57:36.172396  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:57:36.172415  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:57:36.172438  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:57:36.172457  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:57:36.172474  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:57:36.172493  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:57:36.172512  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:57:36.172589  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:57:36.172634  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:57:36.172648  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:57:36.172679  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:57:36.172703  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:57:36.172736  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:57:36.172796  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:57:36.172840  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.172861  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.172878  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.172963  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:36.194873  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:36.286293  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:57:36.291948  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:57:36.308150  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:57:36.312206  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:57:36.325598  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:57:36.329618  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:57:36.346110  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:57:36.350017  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:57:36.365628  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:57:36.369445  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:57:36.383675  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:57:36.387388  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:57:36.403394  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:57:36.432068  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:57:36.461592  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:57:36.491261  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:57:36.523895  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 23:57:36.552719  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:57:36.580284  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:57:36.608342  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:57:36.639670  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:57:36.672003  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:57:36.703856  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:57:36.734275  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:57:36.755638  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:57:36.777805  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:57:36.799338  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:57:36.821463  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:57:36.843600  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:57:36.867808  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:57:36.889233  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:57:36.896091  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:57:36.908363  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913145  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.913212  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:57:36.921857  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:57:36.934186  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:57:36.945282  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949180  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.949249  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:57:36.958068  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:57:36.970160  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:57:36.981053  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985350  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.985410  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:57:36.993828  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:57:37.004616  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:57:37.008764  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:57:37.008830  722351 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0916 23:57:37.008961  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:57:37.008998  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:57:37.009050  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:57:37.026582  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:57:37.026656  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
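Control-plane load-balancing in kube-vip is skipped above because `lsmod | grep ip_vs` found no IPVS modules, but the static-pod manifest is still written so the VIP 192.168.49.254 is announced via ARP leader election. A small sketch of the same module check done by reading /proc/modules instead of shelling out:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/modules")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        loaded := false
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            // Each line starts with the module name followed by a space.
            if strings.HasPrefix(scanner.Text(), "ip_vs ") {
                loaded = true
                break
            }
        }
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("ip_vs loaded:", loaded)
    }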
	I0916 23:57:37.026738  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:57:37.036867  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:57:37.036974  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:57:37.046606  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:57:37.070259  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:57:37.092325  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:57:37.116853  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:57:37.120789  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:57:37.137396  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:37.223494  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:37.256254  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:57:37.256574  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:57:37.256705  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:57:37.256762  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:57:37.278264  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:57:37.435308  722351 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:37.435366  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 23:57:54.013635  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rs0rx7.1v9nwhb46wdsoqvk --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (16.578241326s)
	I0916 23:57:54.013701  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:57:54.233708  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m02 minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:57:54.308006  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:57:54.383356  722351 start.go:319] duration metric: took 17.126777498s to joinCluster
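The join above is driven by two commands: `kubeadm token create --print-join-command --ttl=0` on the existing control plane, then the printed `kubeadm join ...` (plus the extra --control-plane, --apiserver-advertise-address and --cri-socket flags) on the new node. A rough local sketch of chaining those two steps; in the real flow each half runs over SSH on its respective machine:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1: on an existing control-plane node, mint a join token and the
        // matching discovery-token-ca-cert-hash in one command.
        out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            log.Fatalf("token create: %v", err)
        }
        joinCmd := strings.Fields(strings.TrimSpace(string(out)))
        if len(joinCmd) == 0 {
            log.Fatal("empty join command")
        }

        // Step 2: on the joining node, run the printed command. A second
        // control-plane member would also get flags like --control-plane, as
        // the log above shows.
        join := exec.Command(joinCmd[0], joinCmd[1:]...)
        if out, err := join.CombinedOutput(); err != nil {
            log.Fatalf("kubeadm join: %v\n%s", err, out)
        }
        log.Print("node joined")
    }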
	I0916 23:57:54.383433  722351 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:54.383691  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:54.385020  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:57:54.386187  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:57:54.491315  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:57:54.505328  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:57:54.505398  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:57:54.505659  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0916 23:57:56.508947  722351 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0916 23:57:56.508979  722351 node_ready.go:38] duration metric: took 2.003299323s for node "ha-198834-m02" to be "Ready" ...
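Readiness of ha-198834-m02 above is confirmed by polling the node object until it reports the Ready condition. A condensed client-go sketch of that check, assuming a kubeconfig path for this cluster (the test client in the log also rewrites the stale VIP host to https://192.168.49.2:8443 first):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-198834-m02", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        ready := false
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        fmt.Println("node Ready:", ready)
    }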
	I0916 23:57:56.508998  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:57:56.509065  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:57:56.521258  722351 api_server.go:72] duration metric: took 2.137779117s to wait for apiserver process to appear ...
	I0916 23:57:56.521298  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:57:56.521326  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:57:56.527086  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:57:56.528055  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:57:56.528078  722351 api_server.go:131] duration metric: took 6.77168ms to wait for apiserver health ...
	I0916 23:57:56.528087  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:57:56.534412  722351 system_pods.go:59] 19 kube-system pods found
	I0916 23:57:56.534478  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.534486  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.534497  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.534503  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.534515  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534524  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.534535  722351 system_pods.go:61] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534541  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.534547  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.534559  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.534564  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.534667  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.534716  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534725  722351 system_pods.go:61] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.534731  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.534743  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.534748  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.534753  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.534758  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.534765  722351 system_pods.go:74] duration metric: took 6.672375ms to wait for pod list to return data ...
	I0916 23:57:56.534774  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:57:56.538351  722351 default_sa.go:45] found service account: "default"
	I0916 23:57:56.538385  722351 default_sa.go:55] duration metric: took 3.603096ms for default service account to be created ...
	I0916 23:57:56.538399  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:57:56.542274  722351 system_pods.go:86] 19 kube-system pods found
	I0916 23:57:56.542301  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:57:56.542307  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:57:56.542311  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:57:56.542314  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Pending
	I0916 23:57:56.542321  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-2vbn5": pod kindnet-2vbn5 is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542325  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:57:56.542330  722351 system_pods.go:89] "kindnet-mh8pf" [4bbbea44-3bf9-4c36-b876-fb4390d15dfc] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-mh8pf": pod kindnet-mh8pf is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542334  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:57:56.542338  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Pending
	I0916 23:57:56.542344  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:57:56.542347  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Pending
	I0916 23:57:56.542351  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:57:56.542356  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-h2fxd": pod kube-proxy-h2fxd is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542367  722351 system_pods.go:89] "kube-proxy-ld4mc" [8b35ded7-d5ce-4805-8573-9dede265d002] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-ld4mc": pod kube-proxy-ld4mc is already assigned to node "ha-198834-m02")
	I0916 23:57:56.542371  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:57:56.542375  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Pending
	I0916 23:57:56.542377  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:57:56.542380  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Pending
	I0916 23:57:56.542384  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:57:56.542393  722351 system_pods.go:126] duration metric: took 3.988364ms to wait for k8s-apps to be running ...
	I0916 23:57:56.542403  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:57:56.542447  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:57:56.554466  722351 system_svc.go:56] duration metric: took 12.054188ms WaitForService to wait for kubelet
	I0916 23:57:56.554496  722351 kubeadm.go:578] duration metric: took 2.171026353s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:57:56.554519  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:57:56.557501  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557532  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557552  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:57:56.557557  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:57:56.557561  722351 node_conditions.go:105] duration metric: took 3.037317ms to run NodePressure ...
	I0916 23:57:56.557575  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:57:56.557610  722351 start.go:255] writing updated cluster config ...
	I0916 23:57:56.559549  722351 out.go:203] 
	I0916 23:57:56.561097  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:57:56.561232  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.562855  722351 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0916 23:57:56.563951  722351 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:57:56.565051  722351 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:57:56.566271  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:56.566290  722351 cache.go:58] Caching tarball of preloaded images
	I0916 23:57:56.566373  722351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:57:56.566383  722351 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 23:57:56.566485  722351 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0916 23:57:56.566581  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:57:56.586635  722351 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0916 23:57:56.586656  722351 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0916 23:57:56.586673  722351 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:57:56.586704  722351 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:57:56.586811  722351 start.go:364] duration metric: took 87.391µs to acquireMachinesLock for "ha-198834-m03"
	I0916 23:57:56.586843  722351 start.go:93] Provisioning new machine with config: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:57:56.587003  722351 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 23:57:56.589063  722351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 23:57:56.589158  722351 start.go:159] libmachine.API.Create for "ha-198834" (driver="docker")
	I0916 23:57:56.589187  722351 client.go:168] LocalClient.Create starting
	I0916 23:57:56.589263  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0916 23:57:56.589299  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589313  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589365  722351 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0916 23:57:56.589385  722351 main.go:141] libmachine: Decoding PEM data...
	I0916 23:57:56.589398  722351 main.go:141] libmachine: Parsing certificate...
	I0916 23:57:56.589634  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:57:56.607248  722351 network_create.go:77] Found existing network {name:ha-198834 subnet:0xc001595440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 23:57:56.607297  722351 kic.go:121] calculated static IP "192.168.49.4" for the "ha-198834-m03" container
	I0916 23:57:56.607371  722351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:57:56.624198  722351 cli_runner.go:164] Run: docker volume create ha-198834-m03 --label name.minikube.sigs.k8s.io=ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:57:56.642183  722351 oci.go:103] Successfully created a docker volume ha-198834-m03
	I0916 23:57:56.642258  722351 cli_runner.go:164] Run: docker run --rm --name ha-198834-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --entrypoint /usr/bin/test -v ha-198834-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:57:57.021785  722351 oci.go:107] Successfully prepared a docker volume ha-198834-m03
	I0916 23:57:57.021834  722351 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:57:57.021864  722351 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:57:57.021952  722351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:57:59.672995  722351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-198834-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.650992477s)
	I0916 23:57:59.673039  722351 kic.go:203] duration metric: took 2.651177157s to extract preloaded images to volume ...
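
The two docker run invocations above show minikube's volume-seeding pattern: a throwaway kicbase container first checks the named volume, then a second disposable container untars the image preload into it so the new node container starts with its images already in place. A minimal standalone sketch of the same pattern (volume name and tarball path here are illustrative, not taken from this run):

    # Seed a named volume from an lz4-compressed preload using a disposable container.
    VOL=example-node                  # hypothetical volume name
    TARBALL=/tmp/preload.tar.lz4      # hypothetical path to the preloaded image tarball
    docker volume create "$VOL"
    docker run --rm \
      -v "$TARBALL":/preloaded.tar:ro \
      -v "$VOL":/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase:v0.0.48 -I lz4 -xf /preloaded.tar -C /extractDir
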
	W0916 23:57:59.673144  722351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:57:59.673190  722351 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:57:59.673255  722351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:57:59.730169  722351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-198834-m03 --name ha-198834-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-198834-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-198834-m03 --network ha-198834 --ip 192.168.49.4 --volume ha-198834-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:58:00.013728  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Running}}
	I0916 23:58:00.034076  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.054832  722351 cli_runner.go:164] Run: docker exec ha-198834-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:58:00.109517  722351 oci.go:144] the created container "ha-198834-m03" has a running status.
	I0916 23:58:00.109546  722351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa...
	I0916 23:58:00.621029  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 23:58:00.621097  722351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:58:00.651614  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.673435  722351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:58:00.673460  722351 kic_runner.go:114] Args: [docker exec --privileged ha-198834-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:58:00.730412  722351 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0916 23:58:00.749865  722351 machine.go:93] provisionDockerMachine start ...
	I0916 23:58:00.750006  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.771445  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.771738  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.771754  722351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:58:00.920523  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:00.920553  722351 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0916 23:58:00.920616  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:00.940561  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:00.940837  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:00.940853  722351 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0916 23:58:01.103101  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0916 23:58:01.103204  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:01.125182  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:01.125511  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:01.125543  722351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:58:01.275155  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:58:01.275201  722351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0916 23:58:01.275231  722351 ubuntu.go:190] setting up certificates
	I0916 23:58:01.275246  722351 provision.go:84] configureAuth start
	I0916 23:58:01.275318  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:01.296305  722351 provision.go:143] copyHostCerts
	I0916 23:58:01.296378  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296426  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0916 23:58:01.296439  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0916 23:58:01.296527  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0916 23:58:01.296632  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296656  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0916 23:58:01.296682  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0916 23:58:01.296726  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0916 23:58:01.296788  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296825  722351 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0916 23:58:01.296835  722351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0916 23:58:01.296924  722351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0916 23:58:01.297040  722351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
	I0916 23:58:02.100987  722351 provision.go:177] copyRemoteCerts
	I0916 23:58:02.101048  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:58:02.101084  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.119475  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:02.218802  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 23:58:02.218870  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:58:02.251628  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 23:58:02.251700  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:58:02.279052  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 23:58:02.279124  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:58:02.305168  722351 provision.go:87] duration metric: took 1.029902032s to configureAuth
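
configureAuth above generated a Docker TLS server certificate whose SANs cover the loopback address, the node IP and the hostnames (see the san=[...] list at 23:58:01.297), then copied server.pem, server-key.pem and ca.pem into /etc/docker on the node. One way to confirm the SANs landed, assuming shell access to the node (an illustrative check, not part of the test):

    # Print the Subject Alternative Names of the provisioned Docker server certificate.
    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
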
	I0916 23:58:02.305208  722351 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:58:02.305440  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:02.305491  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.322139  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.322413  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.322428  722351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 23:58:02.459594  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0916 23:58:02.459629  722351 ubuntu.go:71] root file system type: overlay
	I0916 23:58:02.459746  722351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 23:58:02.459804  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.476657  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.476985  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.477099  722351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 23:58:02.633394  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 23:58:02.633489  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:02.651145  722351 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:02.651390  722351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 23:58:02.651410  722351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 23:58:03.800032  722351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-16 23:58:02.631485455 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0916 23:58:03.800077  722351 machine.go:96] duration metric: took 3.050188223s to provisionDockerMachine
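
The unit rewrite just above relies on the `diff -u old new || { mv; daemon-reload; enable; restart; }` idiom, so Docker is only replaced and restarted when the rendered unit actually differs from what is on disk; the unified diff in the output is what triggered the restart here. Illustrative commands (not from this run) to inspect the unit systemd ends up with:

    # Show the unit file systemd resolves for docker and its effective ExecStart line.
    systemctl cat docker.service | head -n 20
    systemctl show docker.service --property=ExecStart
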
	I0916 23:58:03.800094  722351 client.go:171] duration metric: took 7.210891992s to LocalClient.Create
	I0916 23:58:03.800121  722351 start.go:167] duration metric: took 7.210962522s to libmachine.API.Create "ha-198834"
	I0916 23:58:03.800131  722351 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0916 23:58:03.800155  722351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:58:03.800229  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:58:03.800295  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.817949  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:03.918038  722351 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:58:03.922382  722351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:58:03.922420  722351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:58:03.922430  722351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:58:03.922438  722351 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:58:03.922452  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0916 23:58:03.922512  722351 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0916 23:58:03.922607  722351 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0916 23:58:03.922620  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0916 23:58:03.922727  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 23:58:03.932298  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:03.961387  722351 start.go:296] duration metric: took 161.230642ms for postStartSetup
	I0916 23:58:03.961811  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:03.979123  722351 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0916 23:58:03.979395  722351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:58:03.979437  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:03.997520  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.091253  722351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:58:04.096537  722351 start.go:128] duration metric: took 7.509514126s to createHost
	I0916 23:58:04.096585  722351 start.go:83] releasing machines lock for "ha-198834-m03", held for 7.509743952s
	I0916 23:58:04.096660  722351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0916 23:58:04.115702  722351 out.go:179] * Found network options:
	I0916 23:58:04.117029  722351 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 23:58:04.118232  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118256  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118281  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	W0916 23:58:04.118299  722351 proxy.go:120] fail to check proxy env: Error ip not in block
	I0916 23:58:04.118395  722351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:58:04.118441  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.118449  722351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:58:04.118515  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0916 23:58:04.136875  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.137594  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0916 23:58:04.231418  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 23:58:04.311016  722351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:58:04.311108  722351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:58:04.340810  722351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:58:04.340841  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.340871  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.340997  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.359059  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0916 23:58:04.371794  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 23:58:04.383345  722351 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0916 23:58:04.383421  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0916 23:58:04.394513  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.405081  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 23:58:04.415653  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 23:58:04.426510  722351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:58:04.436405  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 23:58:04.447135  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 23:58:04.457926  722351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 23:58:04.469563  722351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:58:04.478599  722351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:58:04.488307  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:04.557785  722351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 23:58:04.636805  722351 start.go:495] detecting cgroup driver to use...
	I0916 23:58:04.636855  722351 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:58:04.636899  722351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 23:58:04.649865  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.662323  722351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:58:04.680711  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:04.693319  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 23:58:04.705665  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:04.723842  722351 ssh_runner.go:195] Run: which cri-dockerd
	I0916 23:58:04.727547  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 23:58:04.738845  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0916 23:58:04.758974  722351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 23:58:04.830471  722351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 23:58:04.900429  722351 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0916 23:58:04.900482  722351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0916 23:58:04.920093  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0916 23:58:04.931599  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:05.002855  722351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 23:58:05.807532  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:58:05.819728  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 23:58:05.832303  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:05.844347  722351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 23:58:05.916277  722351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 23:58:05.988520  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.055206  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 23:58:06.080490  722351 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0916 23:58:06.092817  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:06.162707  722351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 23:58:06.248276  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 23:58:06.261931  722351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 23:58:06.262000  722351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 23:58:06.265868  722351 start.go:563] Will wait 60s for crictl version
	I0916 23:58:06.265941  722351 ssh_runner.go:195] Run: which crictl
	I0916 23:58:06.269385  722351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:58:06.305058  722351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0916 23:58:06.305139  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.331725  722351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 23:58:06.358446  722351 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0916 23:58:06.359714  722351 out.go:179]   - env NO_PROXY=192.168.49.2
	I0916 23:58:06.360964  722351 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 23:58:06.362187  722351 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:58:06.379025  722351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:58:06.383173  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
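
The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the current mapping, and copy the temp file back over /etc/hosts in a single step instead of editing it in place. Spelled out as an equivalent sketch (not the literal command run):

    # Refresh the host.minikube.internal entry without duplicating it.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts
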
	I0916 23:58:06.394963  722351 mustload.go:65] Loading cluster: ha-198834
	I0916 23:58:06.395208  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:06.395415  722351 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0916 23:58:06.412700  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:06.412979  722351 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0916 23:58:06.412992  722351 certs.go:194] generating shared ca certs ...
	I0916 23:58:06.413008  722351 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:06.413150  722351 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0916 23:58:06.413202  722351 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0916 23:58:06.413213  722351 certs.go:256] generating profile certs ...
	I0916 23:58:06.413290  722351 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0916 23:58:06.413316  722351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0916 23:58:06.413331  722351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 23:58:07.059616  722351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 ...
	I0916 23:58:07.059648  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783: {Name:mka6f3e20ae0db98330bce12c7c53c8ceb029f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.059850  722351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 ...
	I0916 23:58:07.059873  722351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783: {Name:mk88fba5116449476945068bb066a5fae095ca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:07.060019  722351 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0916 23:58:07.060173  722351 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0916 23:58:07.060303  722351 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0916 23:58:07.060320  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 23:58:07.060332  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 23:58:07.060346  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 23:58:07.060359  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 23:58:07.060371  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 23:58:07.060383  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 23:58:07.060395  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 23:58:07.060407  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 23:58:07.060462  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0916 23:58:07.060492  722351 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0916 23:58:07.060502  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:58:07.060525  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:58:07.060546  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:58:07.060571  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0916 23:58:07.060609  722351 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0916 23:58:07.060634  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.060648  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.060666  722351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.060725  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:07.077675  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:07.167227  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 23:58:07.171339  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 23:58:07.184631  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 23:58:07.188345  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 23:58:07.201195  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 23:58:07.204727  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 23:58:07.217344  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 23:58:07.220977  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 23:58:07.233804  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 23:58:07.237296  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 23:58:07.250936  722351 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 23:58:07.254504  722351 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 23:58:07.267513  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:58:07.293250  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:58:07.319357  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:58:07.345045  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 23:58:07.370793  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 23:58:07.397411  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:58:07.422329  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:58:07.447186  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:58:07.472564  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0916 23:58:07.500373  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0916 23:58:07.526598  722351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:58:07.552426  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 23:58:07.570062  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 23:58:07.589628  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 23:58:07.609486  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 23:58:07.630629  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 23:58:07.650280  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 23:58:07.669308  722351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 23:58:07.687700  722351 ssh_runner.go:195] Run: openssl version
	I0916 23:58:07.694681  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0916 23:58:07.705784  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709662  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.709739  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0916 23:58:07.716649  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0916 23:58:07.726290  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0916 23:58:07.736118  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740041  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.740101  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0916 23:58:07.747081  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 23:58:07.757480  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:58:07.767310  722351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771054  722351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.771114  722351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:07.778013  722351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
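
The hash-named symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention: `openssl x509 -hash` prints the hash of the certificate's subject name, and a `<hash>.0` link under /etc/ssl/certs lets TLS clients locate the CA by that hash. A minimal sketch of the same step for one certificate (illustrative):

    # Link a CA certificate under its OpenSSL subject-hash name so it is trusted system-wide.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
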
	I0916 23:58:07.788245  722351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:58:07.792058  722351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:58:07.792123  722351 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0916 23:58:07.792232  722351 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:58:07.792263  722351 kube-vip.go:115] generating kube-vip config ...
	I0916 23:58:07.792307  722351 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 23:58:07.805180  722351 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 23:58:07.805247  722351 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
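(Editorial sketch.) The static pod manifest above runs kube-vip with leader election so that one control-plane node at a time advertises the virtual IP 192.168.49.254 on eth0 and answers API traffic on port 8443. Later in this log the health check hits /healthz on a node IP; the same probe can be pointed at the VIP. A minimal sketch of such a probe, assuming the probe runs on the cluster network; TLS verification is skipped purely to keep the sketch short, a real check would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// VIP and port come from the kube-vip manifest above.
	url := "https://192.168.49.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Shortcut for illustration only; load the cluster CA
			// (e.g. /var/lib/minikube/certs/ca.crt) in real use.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}
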
	I0916 23:58:07.805296  722351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:58:07.814610  722351 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:58:07.814678  722351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 23:58:07.825352  722351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 23:58:07.844047  722351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:58:07.862757  722351 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 23:58:07.883848  722351 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 23:58:07.887562  722351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:07.899646  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:07.974384  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:08.004718  722351 host.go:66] Checking if "ha-198834" exists ...
	I0916 23:58:08.005001  722351 start.go:317] joinCluster: &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:f
alse logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:08.005124  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 23:58:08.005169  722351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0916 23:58:08.024622  722351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0916 23:58:08.169785  722351 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:08.169853  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 23:58:25.708852  722351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dm2r7.tavul8zm4b55qd6q --discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-198834-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (17.538975369s)
	I0916 23:58:25.708884  722351 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 23:58:25.930343  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198834-m03 minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-198834 minikube.k8s.io/primary=false
	I0916 23:58:26.006016  722351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198834-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 23:58:26.089408  722351 start.go:319] duration metric: took 18.084403561s to joinCluster
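(Editorial sketch.) The join above is a two-step flow: an existing control-plane node mints a join command with `kubeadm token create --print-join-command --ttl=0`, and the new node runs that command extended with --control-plane, --apiserver-advertise-address, --apiserver-bind-port and --node-name. A minimal Go sketch composing those two steps with os/exec, assuming it runs as root on an existing control-plane node; the values mirror the m03 flags seen in the log and are not a substitute for minikube's own join logic.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (on an existing control-plane node): mint a join command.
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println("token create failed:", err)
		return
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2 (on the joining node): extend it for a control-plane join,
	// mirroring the flags used for ha-198834-m03 above.
	full := joinCmd +
		" --control-plane" +
		" --apiserver-advertise-address=192.168.49.4" +
		" --apiserver-bind-port=8443" +
		" --node-name=ha-198834-m03"
	fmt.Println(full)
}
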
	I0916 23:58:26.089494  722351 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 23:58:26.089805  722351 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:58:26.091004  722351 out.go:179] * Verifying Kubernetes components...
	I0916 23:58:26.092246  722351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:26.200675  722351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:26.214424  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 23:58:26.214506  722351 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 23:58:26.214713  722351 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	W0916 23:58:28.218137  722351 node_ready.go:57] node "ha-198834-m03" has "Ready":"False" status (will retry)
	I0916 23:58:29.718579  722351 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0916 23:58:29.718621  722351 node_ready.go:38] duration metric: took 3.503891029s for node "ha-198834-m03" to be "Ready" ...
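(Editorial sketch.) The node_ready wait above polls the node object until its Ready condition is True. A minimal client-go sketch of the same check, assuming a kubeconfig at the default ~/.kube/config location pointing at this cluster; the polling interval and error handling are simplified for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig for the ha-198834 profile at the default path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-198834-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-198834-m03 is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}
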
	I0916 23:58:29.718640  722351 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:58:29.718688  722351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:58:29.730821  722351 api_server.go:72] duration metric: took 3.641289304s to wait for apiserver process to appear ...
	I0916 23:58:29.730847  722351 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:58:29.730870  722351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:58:29.736447  722351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:58:29.737363  722351 api_server.go:141] control plane version: v1.34.0
	I0916 23:58:29.737382  722351 api_server.go:131] duration metric: took 6.528439ms to wait for apiserver health ...
	I0916 23:58:29.737390  722351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:58:29.743125  722351 system_pods.go:59] 27 kube-system pods found
	I0916 23:58:29.743154  722351 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.743159  722351 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.743162  722351 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.743166  722351 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.743169  722351 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.743172  722351 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.743179  722351 system_pods.go:61] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743182  722351 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.743189  722351 system_pods.go:61] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743193  722351 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.743198  722351 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.743202  722351 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.743206  722351 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.743209  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.743212  722351 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.743216  722351 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.743220  722351 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743227  722351 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.743231  722351 system_pods.go:61] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743236  722351 system_pods.go:61] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.743241  722351 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.743245  722351 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.743248  722351 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.743251  722351 system_pods.go:61] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.743254  722351 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.743257  722351 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.743260  722351 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.743267  722351 system_pods.go:74] duration metric: took 5.871633ms to wait for pod list to return data ...
	I0916 23:58:29.743275  722351 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:58:29.746038  722351 default_sa.go:45] found service account: "default"
	I0916 23:58:29.746059  722351 default_sa.go:55] duration metric: took 2.77496ms for default service account to be created ...
	I0916 23:58:29.746067  722351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:58:29.751428  722351 system_pods.go:86] 27 kube-system pods found
	I0916 23:58:29.751454  722351 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0916 23:58:29.751459  722351 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0916 23:58:29.751463  722351 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0916 23:58:29.751466  722351 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0916 23:58:29.751469  722351 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Pending
	I0916 23:58:29.751472  722351 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0916 23:58:29.751478  722351 system_pods.go:89] "kindnet-8klgc" [a5699c22-8aa3-4159-bb6d-261cbb15bcd1] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-8klgc": pod kindnet-8klgc is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751482  722351 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0916 23:58:29.751490  722351 system_pods.go:89] "kindnet-qmgt6" [dea81557-acc3-41e3-8160-712870aba14c] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-qmgt6": pod kindnet-qmgt6 is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751494  722351 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0916 23:58:29.751498  722351 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0916 23:58:29.751501  722351 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Pending
	I0916 23:58:29.751504  722351 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0916 23:58:29.751508  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0916 23:58:29.751512  722351 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Pending
	I0916 23:58:29.751515  722351 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0916 23:58:29.751520  722351 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-d8brp": pod kube-proxy-d8brp is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751526  722351 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0916 23:58:29.751530  722351 system_pods.go:89] "kube-proxy-nj7bh" [a5c775e6-81f4-47ce-966b-598b21714409] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-nj7bh": pod kube-proxy-nj7bh is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751535  722351 system_pods.go:89] "kube-proxy-q9ggj" [fdedb871-6b9e-4c4e-9ef7-337d04c8c30a] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-q9ggj": pod kube-proxy-q9ggj is already assigned to node "ha-198834-m03")
	I0916 23:58:29.751540  722351 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0916 23:58:29.751545  722351 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0916 23:58:29.751550  722351 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Pending
	I0916 23:58:29.751554  722351 system_pods.go:89] "kube-vip-ha-198834" [cde651e3-1550-48cb-a5dc-09d55185429b] Running
	I0916 23:58:29.751558  722351 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0916 23:58:29.751563  722351 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Pending
	I0916 23:58:29.751569  722351 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0916 23:58:29.751577  722351 system_pods.go:126] duration metric: took 5.505301ms to wait for k8s-apps to be running ...
	I0916 23:58:29.751587  722351 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:58:29.751637  722351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:58:29.764067  722351 system_svc.go:56] duration metric: took 12.467532ms WaitForService to wait for kubelet
	I0916 23:58:29.764102  722351 kubeadm.go:578] duration metric: took 3.674577242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:58:29.764127  722351 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:58:29.767676  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767699  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767712  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767717  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767721  722351 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:58:29.767724  722351 node_conditions.go:123] node cpu capacity is 8
	I0916 23:58:29.767728  722351 node_conditions.go:105] duration metric: took 3.595861ms to run NodePressure ...
	I0916 23:58:29.767739  722351 start.go:241] waiting for startup goroutines ...
	I0916 23:58:29.767761  722351 start.go:255] writing updated cluster config ...
	I0916 23:58:29.768076  722351 ssh_runner.go:195] Run: rm -f paused
	I0916 23:58:29.772054  722351 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:58:29.772528  722351 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 23:58:29.776391  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781517  722351 pod_ready.go:94] pod "coredns-66bc5c9577-5wx4k" is "Ready"
	I0916 23:58:29.781544  722351 pod_ready.go:86] duration metric: took 5.128752ms for pod "coredns-66bc5c9577-5wx4k" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.781552  722351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.786524  722351 pod_ready.go:94] pod "coredns-66bc5c9577-mjbz6" is "Ready"
	I0916 23:58:29.786549  722351 pod_ready.go:86] duration metric: took 4.991527ms for pod "coredns-66bc5c9577-mjbz6" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.789148  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793593  722351 pod_ready.go:94] pod "etcd-ha-198834" is "Ready"
	I0916 23:58:29.793614  722351 pod_ready.go:86] duration metric: took 4.43654ms for pod "etcd-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.793622  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797833  722351 pod_ready.go:94] pod "etcd-ha-198834-m02" is "Ready"
	I0916 23:58:29.797856  722351 pod_ready.go:86] duration metric: took 4.228462ms for pod "etcd-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.797864  722351 pod_ready.go:83] waiting for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:29.974055  722351 request.go:683] "Waited before sending request" delay="176.0853ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.173047  722351 request.go:683] "Waited before sending request" delay="193.205885ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.373324  722351 request.go:683] "Waited before sending request" delay="74.260595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198834-m03"
	I0916 23:58:30.573189  722351 request.go:683] "Waited before sending request" delay="196.187075ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.973960  722351 request.go:683] "Waited before sending request" delay="171.749825ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:30.977519  722351 pod_ready.go:94] pod "etcd-ha-198834-m03" is "Ready"
	I0916 23:58:30.977548  722351 pod_ready.go:86] duration metric: took 1.179678858s for pod "etcd-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.172996  722351 request.go:683] "Waited before sending request" delay="195.270589ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0916 23:58:31.176896  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.373184  722351 request.go:683] "Waited before sending request" delay="196.155083ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834"
	I0916 23:58:31.573091  722351 request.go:683] "Waited before sending request" delay="196.292532ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:31.576254  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834" is "Ready"
	I0916 23:58:31.576280  722351 pod_ready.go:86] duration metric: took 399.33205ms for pod "kube-apiserver-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.576288  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.773718  722351 request.go:683] "Waited before sending request" delay="197.34633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m02"
	I0916 23:58:31.973716  722351 request.go:683] "Waited before sending request" delay="196.477986ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:31.978504  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m02" is "Ready"
	I0916 23:58:31.978555  722351 pod_ready.go:86] duration metric: took 402.258846ms for pod "kube-apiserver-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:31.978567  722351 pod_ready.go:83] waiting for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.172964  722351 request.go:683] "Waited before sending request" delay="194.26238ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198834-m03"
	I0916 23:58:32.373491  722351 request.go:683] "Waited before sending request" delay="197.345263ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:32.376525  722351 pod_ready.go:94] pod "kube-apiserver-ha-198834-m03" is "Ready"
	I0916 23:58:32.376552  722351 pod_ready.go:86] duration metric: took 397.9768ms for pod "kube-apiserver-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.573017  722351 request.go:683] "Waited before sending request" delay="196.299414ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0916 23:58:32.577487  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.773969  722351 request.go:683] "Waited before sending request" delay="196.341624ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834"
	I0916 23:58:32.973585  722351 request.go:683] "Waited before sending request" delay="196.346276ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:32.977689  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834" is "Ready"
	I0916 23:58:32.977721  722351 pod_ready.go:86] duration metric: took 400.206125ms for pod "kube-controller-manager-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:32.977735  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.173032  722351 request.go:683] "Waited before sending request" delay="195.180271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m02"
	I0916 23:58:33.373811  722351 request.go:683] "Waited before sending request" delay="197.350717ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m02"
	I0916 23:58:33.376722  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m02" is "Ready"
	I0916 23:58:33.376747  722351 pod_ready.go:86] duration metric: took 399.004052ms for pod "kube-controller-manager-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.376756  722351 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.573048  722351 request.go:683] "Waited before sending request" delay="196.186349ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198834-m03"
	I0916 23:58:33.773733  722351 request.go:683] "Waited before sending request" delay="197.347012ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:33.776944  722351 pod_ready.go:94] pod "kube-controller-manager-ha-198834-m03" is "Ready"
	I0916 23:58:33.776972  722351 pod_ready.go:86] duration metric: took 400.209131ms for pod "kube-controller-manager-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:33.973425  722351 request.go:683] "Waited before sending request" delay="196.344301ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0916 23:58:33.977203  722351 pod_ready.go:83] waiting for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.173688  722351 request.go:683] "Waited before sending request" delay="196.345801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tkhn"
	I0916 23:58:34.373026  722351 request.go:683] "Waited before sending request" delay="196.256084ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834"
	I0916 23:58:34.376079  722351 pod_ready.go:94] pod "kube-proxy-5tkhn" is "Ready"
	I0916 23:58:34.376106  722351 pod_ready.go:86] duration metric: took 398.875647ms for pod "kube-proxy-5tkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.376114  722351 pod_ready.go:83] waiting for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:58:34.573402  722351 request.go:683] "Waited before sending request" delay="197.174223ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:34.773022  722351 request.go:683] "Waited before sending request" delay="196.289258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:34.973958  722351 request.go:683] "Waited before sending request" delay="97.260541ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8brp"
	I0916 23:58:35.173637  722351 request.go:683] "Waited before sending request" delay="196.407064ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.573487  722351 request.go:683] "Waited before sending request" delay="193.254271ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0916 23:58:35.973307  722351 request.go:683] "Waited before sending request" delay="93.259111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	W0916 23:58:36.383328  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:38.882062  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:40.882520  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:42.883194  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:45.382843  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:47.882744  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:49.882993  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:51.883265  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:54.383005  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:56.882555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:58:59.382463  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:01.382897  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:03.883583  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:06.382581  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:08.882275  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:11.382224  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:13.382333  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:15.882727  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:18.383800  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:20.882547  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:22.883081  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:25.383627  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:27.882377  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:29.882787  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:31.884042  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:34.382932  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:36.882730  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:38.882959  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:40.883411  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:43.382771  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:45.882938  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:48.381607  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:50.382229  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:52.382889  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:54.882546  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:56.882802  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0916 23:59:58.882939  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:00.883550  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:03.382872  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:05.383021  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:07.384166  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:09.883064  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:11.884141  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:14.383248  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:16.883441  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:18.884438  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:21.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:23.883713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:26.383093  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:28.883552  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:31.383392  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:33.883626  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:35.883823  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:38.383553  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:40.883430  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:43.383026  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:45.883091  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:48.382865  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:50.882713  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:52.882989  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:55.383076  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:57.383555  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:00:59.882704  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:01.883495  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:04.382406  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:06.383424  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:08.883456  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:11.382988  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:13.882379  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:15.883651  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:18.382551  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:20.382997  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:22.882943  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:24.883256  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:27.383660  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:29.882955  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	W0917 00:01:32.383364  722351 pod_ready.go:104] pod "kube-proxy-d8brp" is not "Ready", error: <nil>
	I0917 00:01:34.382530  722351 pod_ready.go:94] pod "kube-proxy-d8brp" is "Ready"
	I0917 00:01:34.382562  722351 pod_ready.go:86] duration metric: took 3m0.006439942s for pod "kube-proxy-d8brp" in "kube-system" namespace to be "Ready" or be gone ...
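(Editorial sketch.) The three-minute wait above ends when kube-proxy-d8brp finally reports a Ready=True pod condition, which is the signal pod_ready.go polls for. A minimal client-go sketch that reads that condition for a single pod and prints its container statuses, which usually explain a long not-Ready window; it assumes the same default kubeconfig as the previous sketch and is illustrative only.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries a Ready=True condition.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-d8brp", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("ready=%v phase=%s\n", podReady(pod), pod.Status.Phase)
	for _, st := range pod.Status.ContainerStatuses {
		// Restart counts and per-container readiness hint at the cause of delays.
		fmt.Printf("  container %s ready=%v restarts=%d\n", st.Name, st.Ready, st.RestartCount)
	}
}
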
	I0917 00:01:34.382572  722351 pod_ready.go:83] waiting for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.387645  722351 pod_ready.go:94] pod "kube-proxy-h2fxd" is "Ready"
	I0917 00:01:34.387677  722351 pod_ready.go:86] duration metric: took 5.098826ms for pod "kube-proxy-h2fxd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.390707  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396086  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834" is "Ready"
	I0917 00:01:34.396115  722351 pod_ready.go:86] duration metric: took 5.379692ms for pod "kube-scheduler-ha-198834" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.396126  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400646  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m02" is "Ready"
	I0917 00:01:34.400670  722351 pod_ready.go:86] duration metric: took 4.536355ms for pod "kube-scheduler-ha-198834-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.400680  722351 pod_ready.go:83] waiting for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.577209  722351 request.go:683] "Waited before sending request" delay="174.117357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-198834-m03"
	I0917 00:01:34.580767  722351 pod_ready.go:94] pod "kube-scheduler-ha-198834-m03" is "Ready"
	I0917 00:01:34.580796  722351 pod_ready.go:86] duration metric: took 180.109317ms for pod "kube-scheduler-ha-198834-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:34.580808  722351 pod_ready.go:40] duration metric: took 3m4.808720134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:01:34.629691  722351 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:01:34.631405  722351 out.go:179] * Done! kubectl is now configured to use "ha-198834" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:25 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50aecbe9f874a63c5159d55af06211bca7903e623f01f1e603f267caaf6da9a7/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:26 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 16 23:57:29 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:29Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.259744438Z" level=info msg="ignoring event" container=fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.275867775Z" level=info msg="ignoring event" container=64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.320870537Z" level=info msg="ignoring event" container=310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 dockerd[1122]: time="2025-09-16T23:57:38.336829292Z" level=info msg="ignoring event" container=a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:38 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:39 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687384709Z" level=info msg="ignoring event" container=11889e34950f849cf7805c6d56f1957ad9d5af727f4810f2da728671398b9f6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.687719889Z" level=info msg="ignoring event" container=1ccdf9f33d5601763297f230a2f6e51620db2ed183e9f4b9179f4ccef579dfac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756623723Z" level=info msg="ignoring event" container=bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 dockerd[1122]: time="2025-09-16T23:57:51.756673284Z" level=info msg="ignoring event" container=870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 16 23:57:51 ha-198834 cri-dockerd[1427]: time="2025-09-16T23:57:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:01:36 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:01:37 ha-198834 cri-dockerd[1427]: time="2025-09-17T00:01:37Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	43ce744921507       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago       Running             busybox                   0                   2ab50e090d466       busybox-7b57f96db7-pstjp
	f4f7ea59034e3       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   2                   7b09bb6b1db0b       coredns-66bc5c9577-mjbz6
	9a9eb43950f05       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   2                   bfd4ac8a61c79       coredns-66bc5c9577-5wx4k
	1ccdf9f33d560       52546a367cc9e                                                                                         8 minutes ago       Exited              coredns                   1                   bf6d6b59f2413       coredns-66bc5c9577-mjbz6
	11889e34950f8       52546a367cc9e                                                                                         8 minutes ago       Exited              coredns                   1                   870758f308362       coredns-66bc5c9577-5wx4k
	470c5aeb0143c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              8 minutes ago       Running             kindnet-cni               0                   f541f878be896       kindnet-h28vp
	b16ddbbc469c5       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       0                   50aecbe9f874a       storage-provisioner
	2da683f529549       df0860106674d                                                                                         8 minutes ago       Running             kube-proxy                0                   b04f554fbbf03       kube-proxy-5tkhn
	8a32665f7e3e4       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     8 minutes ago       Running             kube-vip                  0                   5e4aed7a38e18       kube-vip-ha-198834
	4f536df8f44eb       a0af72f2ec6d6                                                                                         8 minutes ago       Running             kube-controller-manager   0                   3f97e150fa11b       kube-controller-manager-ha-198834
	ea129c2b5408a       90550c43ad2bc                                                                                         8 minutes ago       Running             kube-apiserver            0                   364803df34eb0       kube-apiserver-ha-198834
	69601afa8d5b0       5f1f5298c888d                                                                                         8 minutes ago       Running             etcd                      0                   d6bbb58cc14ca       etcd-ha-198834
	82a99d0c7744a       46169d968e920                                                                                         8 minutes ago       Running             kube-scheduler            0                   7ffde546949d7       kube-scheduler-ha-198834
	
	
	==> coredns [11889e34950f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:50107 - 45856 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000165011s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:50484 - 7509 "HINFO IN 4510730421515958928.8365162867102253976. udp 57 false 512" - - 0 5.000096464s
	[ERROR] plugin/errors: 2 4510730421515958928.8365162867102253976. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [1ccdf9f33d56] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49262 - 38359 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000112146s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:51442 - 41164 "HINFO IN 3627584456028797286.2821467008707036685. udp 57 false 512" - - 0 5.000125545s
	[ERROR] plugin/errors: 2 3627584456028797286.2821467008707036685. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [9a9eb43950f0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48462 - 46874 "HINFO IN 5273252588524494281.7165436024008789767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039842483s
	[INFO] 10.244.1.2:57104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003506199s
	[INFO] 10.244.1.2:35085 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.022178595s
	[INFO] 10.244.0.4:51301 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001619426s
	[INFO] 10.244.1.2:53849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259853s
	[INFO] 10.244.1.2:45188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162256s
	[INFO] 10.244.1.2:47534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012152721s
	[INFO] 10.244.1.2:52406 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144842s
	[INFO] 10.244.0.4:34463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015354953s
	[INFO] 10.244.0.4:44729 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186473s
	[INFO] 10.244.0.4:49846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119774s
	[INFO] 10.244.0.4:48015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170848s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022712s
	[INFO] 10.244.1.2:41177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176943s
	[INFO] 10.244.1.2:35431 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141918s
	[INFO] 10.244.0.4:42357 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222475s
	[INFO] 10.244.0.4:38639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076388s
	[INFO] 10.244.1.2:48245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137114s
	
	
	==> coredns [f4f7ea59034e] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55965 - 29875 "HINFO IN 6625775143588404920.46653605595863863. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.021009667s
	[INFO] 10.244.1.2:48391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347519s
	[INFO] 10.244.1.2:52968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.006268731s
	[INFO] 10.244.1.2:37064 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.01625905s
	[INFO] 10.244.0.4:38724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015051s
	[INFO] 10.244.0.4:54867 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000812822s
	[INFO] 10.244.0.4:36556 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000823234s
	[INFO] 10.244.0.4:49673 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000092222s
	[INFO] 10.244.1.2:54588 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010881123s
	[INFO] 10.244.1.2:37311 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184317s
	[INFO] 10.244.1.2:34776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214951s
	[INFO] 10.244.1.2:60592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142928s
	[INFO] 10.244.0.4:49014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014497s
	[INFO] 10.244.0.4:49266 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004264282s
	[INFO] 10.244.0.4:42048 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128777s
	[INFO] 10.244.0.4:37542 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071541s
	[INFO] 10.244.1.2:43417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165726s
	[INFO] 10.244.0.4:54211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155228s
	[INFO] 10.244.0.4:54131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161968s
	[INFO] 10.244.1.2:34766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190106s
	[INFO] 10.244.1.2:35363 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179724s
	[INFO] 10.244.1.2:39508 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110509s
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:05:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:53 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 3525bf030f0d49c1ab057441433c477c
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m27s
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m27s
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m33s
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m27s
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m26s  kube-proxy       
	  Normal  Starting                 8m33s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m33s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m33s  kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m33s  kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m33s  kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m28s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           7m59s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           7m28s  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           18s    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:05:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:04:27 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:04:27 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:04:27 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:04:27 +0000   Tue, 16 Sep 2025 23:57:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef91f8bc46ce44eaa19d30e3c9fcdfd0
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m55s
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m58s
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m51s              kube-proxy       
	  Normal  RegisteredNode           7m54s              node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           7m53s              node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           7m28s              node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x7 over 86s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	
	
	Name:               ha-198834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:05:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:01:48 +0000   Tue, 16 Sep 2025 23:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-198834-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 32c4e7dc065e4fa49595825994457b8e
	  System UUID:                6f810798-3461-44d1-91c3-d55b483ec842
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l2jn5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 etcd-ha-198834-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m22s
	  kube-system                 kindnet-67fn9                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m17s
	  kube-system                 kube-apiserver-ha-198834-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-controller-manager-ha-198834-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-proxy-d8brp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-scheduler-ha-198834-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-vip-ha-198834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        92s    kube-proxy       
	  Normal  RegisteredNode  7m24s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  7m23s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  7m23s  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode  18s    node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
	
	
	==> etcd [69601afa8d5b] <==
	{"level":"warn","ts":"2025-09-17T00:04:40.875253Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:41.354780Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"warn","ts":"2025-09-17T00:04:44.699428Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5303da23f403d0c1","rtt":"6.194305ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:44.699456Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5303da23f403d0c1","rtt":"525.369µs","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:44.876512Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:44.876572Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:48.878277Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:48.878341Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:49.700531Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5303da23f403d0c1","rtt":"525.369µs","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:49.700557Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5303da23f403d0c1","rtt":"6.194305ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:52.879305Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:52.879360Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:54.700703Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5303da23f403d0c1","rtt":"6.194305ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:54.700781Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5303da23f403d0c1","rtt":"525.369µs","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:56.880444Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:56.880496Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"5303da23f403d0c1","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:59.701505Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5303da23f403d0c1","rtt":"525.369µs","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:04:59.701538Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5303da23f403d0c1","rtt":"6.194305ms","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"info","ts":"2025-09-17T00:05:00.057319Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5303da23f403d0c1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-17T00:05:00.057366Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:05:00.057405Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:05:00.060132Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5303da23f403d0c1","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-17T00:05:00.060214Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:05:00.075697Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:05:00.078166Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	
	
	==> kernel <==
	 00:05:51 up  2:48,  0 users,  load average: 2.30, 1.88, 1.35
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [470c5aeb0143] <==
	I0917 00:05:10.420386       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:20.419248       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:20.419291       1 main.go:301] handling current node
	I0917 00:05:20.419310       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:20.419317       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:05:20.419528       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:20.419540       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:30.418370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:30.418409       1 main.go:301] handling current node
	I0917 00:05:30.418430       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:30.418437       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:05:30.418629       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:30.418641       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:40.418896       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:40.419001       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:05:40.419203       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:40.419213       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:40.419325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:40.419337       1 main.go:301] handling current node
	I0917 00:05:50.419127       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:50.419157       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:50.419382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:50.419397       1 main.go:301] handling current node
	I0917 00:05:50.419409       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:50.419413       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ea129c2b5408] <==
	I0916 23:58:34.361323       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:36.632983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:02.667929       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:58.976838       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:19.218755       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:15.644338       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:43.338268       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:03:18.851078       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58262: use of closed network connection
	E0917 00:03:19.024113       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58282: use of closed network connection
	E0917 00:03:19.194951       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58306: use of closed network connection
	E0917 00:03:19.388722       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58332: use of closed network connection
	E0917 00:03:19.557698       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58342: use of closed network connection
	E0917 00:03:19.744687       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58348: use of closed network connection
	E0917 00:03:19.919836       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58362: use of closed network connection
	E0917 00:03:20.087518       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58376: use of closed network connection
	E0917 00:03:20.254024       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:58398: use of closed network connection
	E0917 00:03:22.459781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48968: use of closed network connection
	E0917 00:03:22.632160       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:48992: use of closed network connection
	E0917 00:03:22.799975       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:49024: use of closed network connection
	I0917 00:03:39.352525       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:47.239226       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0917 00:04:17.941970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	W0917 00:04:47.942711       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0917 00:04:56.921453       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:04:56.921480       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4f536df8f44e] <==
	I0916 23:57:23.340737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0916 23:57:23.340876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:57:23.341125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:57:23.341625       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0916 23:57:23.341694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0916 23:57:23.342559       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0916 23:57:23.344828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:57:23.344975       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:57:23.345054       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:57:23.345095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:57:23.345107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:57:23.345114       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:57:23.346125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:23.351186       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834" podCIDRs=["10.244.0.0/24"]
	I0916 23:57:23.356557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:23.360087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:53.917484       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m02\" does not exist"
	I0916 23:57:53.927329       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:58.295579       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	E0916 23:58:24.690329       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 23:58:24.703047       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:58:25.387067       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m03\" does not exist"
	I0916 23:58:25.397154       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m03" podCIDRs=["10.244.2.0/24"]
	I0916 23:58:28.308323       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	E0917 00:01:35.727697       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [2da683f52954] <==
	I0916 23:57:24.932824       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:57:25.001436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:57:25.102414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:57:25.102449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:57:25.102563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:57:25.131540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:57:25.131604       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:57:25.138482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:57:25.139006       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:57:25.139079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:57:25.143232       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:57:25.143254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:57:25.143282       1 config.go:200] "Starting service config controller"
	I0916 23:57:25.143288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:57:25.143298       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:57:25.143304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:57:25.144514       1 config.go:309] "Starting node config controller"
	I0916 23:57:25.144540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:57:25.144548       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:57:25.243772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [82a99d0c7744] <==
	E0916 23:58:30.036759       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.036813       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5897933c-61bc-4eef-8922-66c37ba68c57(kube-system/kindnet-rwc59) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	E0916 23:58:30.036834       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwc59\": pod kindnet-rwc59 is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-rwc59"
	I0916 23:58:30.038109       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rwc59" node="ha-198834-m03"
	E0916 23:58:30.048424       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:30.048665       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4edbf3a1-360c-4f5c-81a3-aa63deb9a159(kube-system/kindnet-lpn5v) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	E0916 23:58:30.048751       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	I0916 23:58:30.051563       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:32.033373       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:32.033442       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b547a423-84fb-45ae-be85-ebd5ae31cede(kube-system/kindnet-wklkh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	E0916 23:58:32.033468       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	I0916 23:58:32.034562       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:34.059741       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	E0916 23:58:34.059840       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ceef8152-3e11-4bf0-99dc-43470c027544(kube-system/kindnet-cdptd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.059869       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.060293       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	E0916 23:58:34.060658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c8813f5f-dfaf-4be1-a0ba-e444bcb2e943(kube-system/kindnet-8t8pb) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	E0916 23:58:34.061375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	I0916 23:58:34.061557       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	I0916 23:58:34.062640       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	I0917 00:01:35.538693       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="ecad6988-1efb-4cc6-8920-902b41d3f3ed" pod="default/busybox-7b57f96db7-kg4q6" assumedNode="ha-198834-m02" currentNode="ha-198834-m03"
	E0917 00:01:35.544474       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m03"
	E0917 00:01:35.546366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ecad6988-1efb-4cc6-8920-902b41d3f3ed(default/busybox-7b57f96db7-kg4q6) was assumed on ha-198834-m03 but assigned to ha-198834-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	E0917 00:01:35.546583       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	I0917 00:01:35.548055       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m02"
	
	
	==> kubelet <==
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349086    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c51d39f-7e43-461b-a021-13ddf0cb9845-lib-modules\") pod \"kindnet-h28vp\" (UID: \"6c51d39f-7e43-461b-a021-13ddf0cb9845\") " pod="kube-system/kindnet-h28vp"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349103    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-xtables-lock\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.349123    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84n49\" (UniqueName: \"kubernetes.io/projected/5edbfebe-2590-4d23-b80e-7496a4e9a5b6-kube-api-access-84n49\") pod \"kube-proxy-5tkhn\" (UID: \"5edbfebe-2590-4d23-b80e-7496a4e9a5b6\") " pod="kube-system/kube-proxy-5tkhn"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650251    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-config-volume\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650425    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5ns\" (UniqueName: \"kubernetes.io/projected/c918625f-be11-44bf-8b82-d4c21b8993d1-kube-api-access-th5ns\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650660    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c918625f-be11-44bf-8b82-d4c21b8993d1-config-volume\") pod \"coredns-66bc5c9577-mjbz6\" (UID: \"c918625f-be11-44bf-8b82-d4c21b8993d1\") " pod="kube-system/coredns-66bc5c9577-mjbz6"
	Sep 16 23:57:24 ha-198834 kubelet[2468]: I0916 23:57:24.650701    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhmb4\" (UniqueName: \"kubernetes.io/projected/6f279fd8-dd3c-49a5-863d-a53124ecf1f5-kube-api-access-xhmb4\") pod \"coredns-66bc5c9577-5wx4k\" (UID: \"6f279fd8-dd3c-49a5-863d-a53124ecf1f5\") " pod="kube-system/coredns-66bc5c9577-5wx4k"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.014693    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5tkhn" podStartSLOduration=1.014665687 podStartE2EDuration="1.014665687s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:24.932304069 +0000 UTC m=+6.176281069" watchObservedRunningTime="2025-09-16 23:57:25.014665687 +0000 UTC m=+6.258642688"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.042478    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.046332    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f541f878be89694936d8219d8e7fc682a8a169d9edf6417f067927aa4748c0ae"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153403    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrvp\" (UniqueName: \"kubernetes.io/projected/6b6f64f3-2647-4e13-be41-47fcc6111f3e-kube-api-access-jqrvp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:25 ha-198834 kubelet[2468]: I0916 23:57:25.153458    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b6f64f3-2647-4e13-be41-47fcc6111f3e-tmp\") pod \"storage-provisioner\" (UID: \"6b6f64f3-2647-4e13-be41-47fcc6111f3e\") " pod="kube-system/storage-provisioner"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098005    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5wx4k" podStartSLOduration=2.097979793 podStartE2EDuration="2.097979793s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.086842117 +0000 UTC m=+7.330819118" watchObservedRunningTime="2025-09-16 23:57:26.097979793 +0000 UTC m=+7.341956793"
	Sep 16 23:57:26 ha-198834 kubelet[2468]: I0916 23:57:26.098130    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.098124108 podStartE2EDuration="1.098124108s" podCreationTimestamp="2025-09-16 23:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.097817254 +0000 UTC m=+7.341794256" watchObservedRunningTime="2025-09-16 23:57:26.098124108 +0000 UTC m=+7.342101108"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.159968    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mjbz6" podStartSLOduration=5.159946005 podStartE2EDuration="5.159946005s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 23:57:26.124330373 +0000 UTC m=+7.368307374" watchObservedRunningTime="2025-09-16 23:57:29.159946005 +0000 UTC m=+10.403923006"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.193262    2468 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 23:57:29 ha-198834 kubelet[2468]: I0916 23:57:29.194144    2468 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 23:57:30 ha-198834 kubelet[2468]: I0916 23:57:30.158085    2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h28vp" podStartSLOduration=1.342825895 podStartE2EDuration="6.158061718s" podCreationTimestamp="2025-09-16 23:57:24 +0000 UTC" firstStartedPulling="2025-09-16 23:57:24.955662014 +0000 UTC m=+6.199639012" lastFinishedPulling="2025-09-16 23:57:29.770897851 +0000 UTC m=+11.014874835" observedRunningTime="2025-09-16 23:57:30.157595407 +0000 UTC m=+11.401572408" watchObservedRunningTime="2025-09-16 23:57:30.158061718 +0000 UTC m=+11.402038720"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.230434    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310e06fbf27552640b0b3a8e13bad59df698b55eb4f3fb6f18b12db35aa6c730"
	Sep 16 23:57:39 ha-198834 kubelet[2468]: I0916 23:57:39.258365    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9537db0dd134f5d54858edc93311297fbfcf0df7c8779512025918dcaa8fc3d"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370599    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.370662    2468 scope.go:117] "RemoveContainer" containerID="fde474653f398ec39c3db826d18aef42dd96b2e13f969de6637124df51136f75"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.388953    2468 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6d6b59f24132f5ce3eeb0feb770948fcab77227dc0f50c12a706b85a62d850"
	Sep 16 23:57:52 ha-198834 kubelet[2468]: I0916 23:57:52.389033    2468 scope.go:117] "RemoveContainer" containerID="64da07c62c4a9952e882760e7e5b5c04eda9df5e202ce0e9c2bf6fc892deeeea"
	Sep 17 00:01:35 ha-198834 kubelet[2468]: I0917 00:01:35.703764    2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5r6\" (UniqueName: \"kubernetes.io/projected/a7cf1231-2a12-4247-a01a-2c2f02f5f2d8-kube-api-access-vt5r6\") pod \"busybox-7b57f96db7-pstjp\" (UID: \"a7cf1231-2a12-4247-a01a-2c2f02f5f2d8\") " pod="default/busybox-7b57f96db7-pstjp"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (88.32s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (503.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 stop --alsologtostderr -v 5
E0917 00:06:13.672094  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 stop --alsologtostderr -v 5: (32.285964777s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 start --wait true --alsologtostderr -v 5
E0917 00:06:41.372101  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:36.251105  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:11:13.668184  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:11:59.326698  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 start --wait true --alsologtostderr -v 5: exit status 80 (7m48.802122715s)

                                                
                                                
-- stdout --
	* [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	* Pulling base image v0.0.48 ...
	* Enabled addons: 
	
	* Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-198834-m04" worker node in "ha-198834" cluster
	* Pulling base image v0.0.48 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:06:25.424279  767194 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:06:25.424573  767194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:06:25.424581  767194 out.go:374] Setting ErrFile to fd 2...
	I0917 00:06:25.424586  767194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:06:25.424775  767194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:06:25.425286  767194 out.go:368] Setting JSON to false
	I0917 00:06:25.426324  767194 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10118,"bootTime":1758057468,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:06:25.426427  767194 start.go:140] virtualization: kvm guest
	I0917 00:06:25.428578  767194 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:06:25.430211  767194 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:06:25.430246  767194 notify.go:220] Checking for updates...
	I0917 00:06:25.432570  767194 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:06:25.433820  767194 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:25.435087  767194 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:06:25.436546  767194 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:06:25.437859  767194 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:06:25.439704  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:25.439894  767194 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:06:25.464302  767194 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:06:25.464438  767194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:06:25.516697  767194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:06:25.50681521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[N
ame:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:06:25.516812  767194 docker.go:318] overlay module found
	I0917 00:06:25.518746  767194 out.go:179] * Using the docker driver based on existing profile
	I0917 00:06:25.519979  767194 start.go:304] selected driver: docker
	I0917 00:06:25.519997  767194 start.go:918] validating driver "docker" against &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:25.520122  767194 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:06:25.520208  767194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:06:25.572516  767194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:06:25.563271649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:06:25.573652  767194 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:06:25.573697  767194 cni.go:84] Creating CNI manager for ""
	I0917 00:06:25.573785  767194 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:06:25.573870  767194 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:f
alse kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:25.576437  767194 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0917 00:06:25.577616  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:06:25.578818  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:06:25.579785  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:25.579821  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:06:25.579826  767194 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:06:25.579871  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:06:25.579979  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:06:25.579993  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:06:25.580143  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:25.599791  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:06:25.599812  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:06:25.599832  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:06:25.599862  767194 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:06:25.599948  767194 start.go:364] duration metric: took 62.805µs to acquireMachinesLock for "ha-198834"
	I0917 00:06:25.599973  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:06:25.599982  767194 fix.go:54] fixHost starting: 
	I0917 00:06:25.600220  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:06:25.616766  767194 fix.go:112] recreateIfNeeded on ha-198834: state=Stopped err=<nil>
	W0917 00:06:25.616794  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:06:25.618968  767194 out.go:252] * Restarting existing docker container for "ha-198834" ...
	I0917 00:06:25.619043  767194 cli_runner.go:164] Run: docker start ha-198834
	I0917 00:06:25.855847  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:06:25.873957  767194 kic.go:430] container "ha-198834" state is running.
	I0917 00:06:25.874450  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:25.892189  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:25.892415  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:06:25.892480  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:25.912009  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:25.912263  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:25.912277  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:06:25.912887  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59988->127.0.0.1:32813: read: connection reset by peer
	I0917 00:06:29.050047  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:06:29.050078  767194 ubuntu.go:182] provisioning hostname "ha-198834"
	I0917 00:06:29.050148  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.067712  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.067965  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.067980  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0917 00:06:29.215970  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:06:29.216043  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.234106  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.234329  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.234345  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:06:29.370392  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:29.370431  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:06:29.370460  767194 ubuntu.go:190] setting up certificates
	I0917 00:06:29.370469  767194 provision.go:84] configureAuth start
	I0917 00:06:29.370526  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:29.387543  767194 provision.go:143] copyHostCerts
	I0917 00:06:29.387579  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:29.387610  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:06:29.387629  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:29.387709  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:06:29.387817  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:29.387848  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:06:29.387857  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:29.387927  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:06:29.388004  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:29.388027  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:06:29.388036  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:29.388076  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:06:29.388269  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0917 00:06:29.680052  767194 provision.go:177] copyRemoteCerts
	I0917 00:06:29.680112  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:06:29.680162  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.697396  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:29.794745  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:06:29.794807  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:06:29.818846  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:06:29.818935  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:06:29.843109  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:06:29.843177  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:06:29.867681  767194 provision.go:87] duration metric: took 497.192274ms to configureAuth
	I0917 00:06:29.867713  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:06:29.867938  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:29.867986  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.885190  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.885426  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.885443  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:06:30.020557  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:06:30.020583  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:06:30.020695  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:06:30.020755  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.038274  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:30.038492  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:30.038556  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:06:30.187120  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:06:30.187195  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.205293  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:30.205508  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:30.205531  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:06:30.346335  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:30.346367  767194 machine.go:96] duration metric: took 4.453936173s to provisionDockerMachine
	I0917 00:06:30.346383  767194 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0917 00:06:30.346398  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:06:30.346454  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:06:30.346492  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.363443  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.460028  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:06:30.463596  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:06:30.463625  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:06:30.463633  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:06:30.463639  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:06:30.463650  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:06:30.463700  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:06:30.463783  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:06:30.463796  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:06:30.463882  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:06:30.472864  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:30.497731  767194 start.go:296] duration metric: took 151.329262ms for postStartSetup
	I0917 00:06:30.497818  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:06:30.497853  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.515030  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.607057  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:06:30.611598  767194 fix.go:56] duration metric: took 5.011609188s for fixHost
	I0917 00:06:30.611632  767194 start.go:83] releasing machines lock for "ha-198834", held for 5.011665153s
	I0917 00:06:30.611691  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:30.629667  767194 ssh_runner.go:195] Run: cat /version.json
	I0917 00:06:30.629691  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:06:30.629719  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.629746  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.648073  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.648707  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.812105  767194 ssh_runner.go:195] Run: systemctl --version
	I0917 00:06:30.816966  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:06:30.821509  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:06:30.840562  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:06:30.840635  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:06:30.850098  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:06:30.850133  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:30.850174  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:30.850289  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:30.867420  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:06:30.877948  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:06:30.888651  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:06:30.888731  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:06:30.899002  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:30.909052  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:06:30.918885  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:30.928779  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:06:30.938579  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:06:30.949499  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:06:30.960372  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:06:30.971253  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:06:30.980460  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:06:30.989781  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:31.059433  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:06:31.134046  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:31.134104  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:31.134189  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:06:31.147025  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:31.158451  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:06:31.177473  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:31.189232  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:06:31.201624  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:31.218917  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:06:31.222505  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:06:31.231136  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:06:31.249756  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:06:31.318828  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:06:31.386194  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:06:31.386293  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:06:31.405146  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:06:31.416620  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:31.483436  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:06:32.289053  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:06:32.300858  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:06:32.312042  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:06:32.323965  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:06:32.335721  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:06:32.399500  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:06:32.463504  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:32.532114  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:06:32.554184  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:06:32.565656  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:32.632393  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:06:32.706727  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:06:32.718700  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:06:32.718779  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:06:32.722502  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:06:32.722558  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:06:32.725864  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:06:32.759463  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:06:32.759531  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:06:32.784419  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:06:32.811577  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:06:32.811654  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:06:32.828274  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:06:32.832384  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:06:32.844198  767194 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:06:32.844338  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:32.844391  767194 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:06:32.866962  767194 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:06:32.866988  767194 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:06:32.867045  767194 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:06:32.888238  767194 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:06:32.888260  767194 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:06:32.888271  767194 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0917 00:06:32.888408  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:06:32.888467  767194 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:06:32.937957  767194 cni.go:84] Creating CNI manager for ""
	I0917 00:06:32.937987  767194 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:06:32.937999  767194 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:06:32.938023  767194 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:06:32.938138  767194 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 00:06:32.938157  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:06:32.938196  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:06:32.951493  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:06:32.951590  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:06:32.951639  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:06:32.960559  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:06:32.960633  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:06:32.969398  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0917 00:06:32.986997  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:06:33.005302  767194 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0917 00:06:33.023722  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:06:33.042510  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:06:33.046353  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:06:33.057738  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:33.121569  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:06:33.146613  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0917 00:06:33.146635  767194 certs.go:194] generating shared ca certs ...
	I0917 00:06:33.146655  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.146819  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:06:33.146861  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:06:33.146872  767194 certs.go:256] generating profile certs ...
	I0917 00:06:33.147007  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:06:33.147039  767194 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731
	I0917 00:06:33.147053  767194 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:06:33.244684  767194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 ...
	I0917 00:06:33.244725  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731: {Name:mkeb1335a8dc05724d212e3f3c2f54f358e1623c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.244951  767194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731 ...
	I0917 00:06:33.244976  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731: {Name:mkb539de1a460dc24807c303f56b400b0045d38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.245116  767194 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0917 00:06:33.245304  767194 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0917 00:06:33.245488  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:06:33.245509  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:06:33.245530  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:06:33.245548  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:06:33.245569  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:06:33.245589  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:06:33.245603  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:06:33.245616  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:06:33.245631  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:06:33.245698  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:06:33.245742  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:06:33.245759  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:06:33.245789  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:06:33.245819  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:06:33.245852  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:06:33.245931  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:33.245973  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.246001  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.246019  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.246713  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:06:33.280935  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:06:33.310873  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:06:33.335758  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:06:33.364379  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:06:33.390832  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:06:33.415955  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:06:33.440057  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:06:33.463203  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:06:33.486818  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:06:33.510617  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:06:33.534829  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:06:33.553186  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:06:33.558602  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:06:33.568556  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.572286  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.572354  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.579085  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:06:33.588476  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:06:33.598074  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.601602  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.601665  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.608370  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:06:33.617493  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:06:33.626827  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.630358  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.630412  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.637101  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:06:33.645992  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:06:33.649484  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:06:33.657172  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:06:33.664432  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:06:33.673579  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:06:33.681621  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:06:33.690060  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:06:33.697708  767194 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:33.697865  767194 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:06:33.723793  767194 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:06:33.738005  767194 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:06:33.738035  767194 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:06:33.738100  767194 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:06:33.751261  767194 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:06:33.751774  767194 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-198834" does not appear in /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:33.751968  767194 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-661878/kubeconfig needs updating (will repair): [kubeconfig missing "ha-198834" cluster setting kubeconfig missing "ha-198834" context setting]
	I0917 00:06:33.752337  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.752804  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:06:33.753302  767194 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:06:33.753319  767194 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:06:33.753323  767194 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:06:33.753327  767194 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:06:33.753332  767194 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:06:33.753384  767194 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:06:33.753793  767194 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:06:33.766494  767194 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:06:33.766524  767194 kubeadm.go:593] duration metric: took 28.480766ms to restartPrimaryControlPlane
	I0917 00:06:33.766536  767194 kubeadm.go:394] duration metric: took 68.837067ms to StartCluster
	I0917 00:06:33.766560  767194 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.766635  767194 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:33.767596  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.767874  767194 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:06:33.767916  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:06:33.767929  767194 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:06:33.768219  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:33.771075  767194 out.go:179] * Enabled addons: 
	I0917 00:06:33.772321  767194 addons.go:514] duration metric: took 4.387344ms for enable addons: enabled=[]
	I0917 00:06:33.772363  767194 start.go:246] waiting for cluster config update ...
	I0917 00:06:33.772375  767194 start.go:255] writing updated cluster config ...
	I0917 00:06:33.774041  767194 out.go:203] 
	I0917 00:06:33.775488  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:33.775605  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:33.777754  767194 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0917 00:06:33.779232  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:06:33.780466  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:06:33.781663  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:33.781696  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:06:33.781785  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:06:33.781814  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:06:33.781827  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:06:33.782011  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:33.808184  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:06:33.808211  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:06:33.808230  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:06:33.808264  767194 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:06:33.808324  767194 start.go:364] duration metric: took 41.8µs to acquireMachinesLock for "ha-198834-m02"
	I0917 00:06:33.808349  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:06:33.808357  767194 fix.go:54] fixHost starting: m02
	I0917 00:06:33.808657  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:06:33.830576  767194 fix.go:112] recreateIfNeeded on ha-198834-m02: state=Stopped err=<nil>
	W0917 00:06:33.830617  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:06:33.832420  767194 out.go:252] * Restarting existing docker container for "ha-198834-m02" ...
	I0917 00:06:33.832507  767194 cli_runner.go:164] Run: docker start ha-198834-m02
	I0917 00:06:34.153635  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:06:34.174085  767194 kic.go:430] container "ha-198834-m02" state is running.
	I0917 00:06:34.174485  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:34.193433  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:34.193710  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:06:34.193778  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:34.214780  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:34.215097  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:34.215113  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:06:34.215694  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38290->127.0.0.1:32818: read: connection reset by peer
	I0917 00:06:37.354066  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:06:37.354095  767194 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0917 00:06:37.354152  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.371082  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:37.371306  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:37.371320  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0917 00:06:37.519883  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:06:37.519999  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.537320  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:37.537534  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:37.537550  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:06:37.672583  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:37.672613  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:06:37.672631  767194 ubuntu.go:190] setting up certificates
	I0917 00:06:37.672648  767194 provision.go:84] configureAuth start
	I0917 00:06:37.672696  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:37.689646  767194 provision.go:143] copyHostCerts
	I0917 00:06:37.689686  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:37.689726  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:06:37.689739  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:37.689816  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:06:37.689949  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:37.689980  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:06:37.689988  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:37.690037  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:06:37.690112  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:37.690144  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:06:37.690151  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:37.690194  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:06:37.690275  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0917 00:06:37.816978  767194 provision.go:177] copyRemoteCerts
	I0917 00:06:37.817061  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:06:37.817110  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.833876  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:37.931727  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:06:37.931807  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:06:37.957434  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:06:37.957498  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:06:37.982656  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:06:37.982715  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:06:38.008383  767194 provision.go:87] duration metric: took 335.719749ms to configureAuth
	I0917 00:06:38.008424  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:06:38.008674  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:38.008734  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.025557  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.025785  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.025797  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:06:38.163170  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:06:38.163196  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:06:38.163371  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:06:38.163449  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.185210  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.185534  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.185648  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:06:38.356034  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:06:38.356160  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.375350  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.375668  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.375699  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:06:50.199822  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-17 00:04:28.867992287 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-17 00:06:38.349897889 +0000
	@@ -11,6 +11,7 @@
	 Type=notify
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0917 00:06:50.199856  767194 machine.go:96] duration metric: took 16.006130584s to provisionDockerMachine
	I0917 00:06:50.199874  767194 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0917 00:06:50.199898  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:06:50.199991  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:06:50.200037  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.231846  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.352925  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:06:50.364867  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:06:50.365109  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:06:50.365165  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:06:50.365182  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:06:50.365203  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:06:50.365613  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:06:50.365774  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:06:50.365791  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:06:50.366045  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:06:50.388970  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:50.439295  767194 start.go:296] duration metric: took 239.401963ms for postStartSetup
	I0917 00:06:50.439403  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:06:50.439460  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.471007  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.602680  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:06:50.622483  767194 fix.go:56] duration metric: took 16.814116597s for fixHost
	I0917 00:06:50.622519  767194 start.go:83] releasing machines lock for "ha-198834-m02", held for 16.814180436s
	I0917 00:06:50.622611  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:50.653586  767194 out.go:179] * Found network options:
	I0917 00:06:50.656159  767194 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:06:50.657611  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:06:50.657663  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:06:50.657748  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:06:50.657820  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.658056  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:06:50.658130  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.695981  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.696302  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.813556  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:06:50.945454  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:06:50.945549  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:06:50.963173  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:06:50.963207  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:50.963244  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:50.963393  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:51.026654  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:06:51.062543  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:06:51.084179  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:06:51.084245  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:06:51.116429  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:51.134652  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:06:51.149737  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:51.178368  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:06:51.192765  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:06:51.210476  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:06:51.239805  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:06:51.263323  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:06:51.278110  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:06:51.292395  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:51.494387  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
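
The sed pipeline above brings /etc/containerd/config.toml in line with the detected "systemd" cgroup driver: sandbox_image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is set to true, legacy io.containerd.runtime.v1.linux / runc.v1 references are rewritten to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports = true is re-inserted under [plugins."io.containerd.grpc.v1.cri"]; IPv4 forwarding is also switched on before containerd is reloaded and restarted. A small sketch for spot-checking the result on the node (key names taken from the sed expressions above):

	sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	cat /proc/sys/net/ipv4/ip_forward   # should print 1 after the echo above
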
	I0917 00:06:51.834314  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:51.834371  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:51.834425  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:06:51.865409  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:51.888868  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:06:51.925439  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:51.950993  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:06:51.977155  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:52.018179  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:06:52.023424  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:06:52.036424  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:06:52.064244  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:06:52.246651  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:06:52.441476  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:06:52.441527  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
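
The 129-byte daemon.json pushed here is what switches dockerd itself to the systemd cgroup driver; the log records only its size, not its contents. A plausible reconstruction, written as a shell sketch (the exact fields are an assumption based on minikube's usual Docker configuration, not taken from this log):

	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=systemd"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	EOF
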
	I0917 00:06:52.483989  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:06:52.501544  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:52.690125  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:07:09.204303  767194 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.51413344s)
	I0917 00:07:09.204382  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:07:09.225679  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:07:09.253125  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:07:09.286728  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:09.309012  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:07:09.445797  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:07:09.588443  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:09.726437  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:07:09.759063  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:07:09.787528  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:09.918052  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:07:10.070248  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:10.091720  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:07:10.091835  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:07:10.104106  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:07:10.104210  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:07:10.109447  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:07:10.164469  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
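
The version probe above confirms the CRI endpoint is answering: runtime name docker, runtime version 28.4.0, CRI API v1. The same checks can be repeated by hand, since crictl reads its runtime-endpoint from the /etc/crictl.yaml written earlier:

	sudo /usr/bin/crictl version
	docker version --format '{{.Server.Version}}'
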
	I0917 00:07:10.164546  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:10.206116  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:10.251181  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:07:10.252538  767194 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:07:10.254028  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:07:10.280282  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:07:10.286408  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:10.315665  767194 mustload.go:65] Loading cluster: ha-198834
	I0917 00:07:10.317007  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:10.317340  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:07:10.349566  767194 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:07:10.349878  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0917 00:07:10.349892  767194 certs.go:194] generating shared ca certs ...
	I0917 00:07:10.349931  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:07:10.350083  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:07:10.350139  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:07:10.350152  767194 certs.go:256] generating profile certs ...
	I0917 00:07:10.350273  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:07:10.350356  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.11b60fbb
	I0917 00:07:10.350412  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:07:10.350424  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:07:10.350443  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:07:10.350459  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:07:10.350474  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:07:10.350489  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:07:10.350505  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:07:10.350519  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:07:10.350532  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:07:10.350613  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:07:10.350656  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:07:10.350669  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:07:10.350702  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:07:10.350734  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:07:10.350774  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:07:10.350834  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:10.350874  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:10.350896  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:07:10.350924  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:07:10.350992  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:07:10.376726  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:07:10.493359  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:07:10.503886  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:07:10.534629  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:07:10.546504  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:07:10.568315  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:07:10.575486  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:07:10.605107  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:07:10.617021  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:07:10.651278  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:07:10.670568  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:07:10.696371  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:07:10.704200  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:07:10.732773  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:07:10.783862  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:07:10.831455  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:07:10.878503  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:07:10.928036  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:07:10.987893  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:07:11.056094  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:07:11.123465  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:07:11.173229  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:07:11.218880  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:07:11.260399  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:07:11.310489  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:07:11.343030  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:07:11.378463  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:07:11.409826  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:07:11.456579  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:07:11.506523  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:07:11.540827  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:07:11.586318  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:07:11.600141  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:07:11.619035  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.625867  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.626054  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.639263  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:07:11.653785  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:07:11.672133  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.681092  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.681171  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.692463  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:07:11.707982  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:07:11.728502  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.735225  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.735287  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.745817  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
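
Each CA bundle copied into /usr/share/ca-certificates is activated by symlinking it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two test certs), which is the lookup scheme OpenSSL uses for CA directories. A generic sketch of the same pattern:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
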
	I0917 00:07:11.762496  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:07:11.768239  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:07:11.782100  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:07:11.796792  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:07:11.807595  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:07:11.818618  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:07:11.828824  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
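
Each control-plane certificate already present on the node is then checked with openssl's -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now. For example:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "still valid" || echo "expires within 24h"
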
	I0917 00:07:11.839591  767194 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0917 00:07:11.839780  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:07:11.839824  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:07:11.839873  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:07:11.860859  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:07:11.861012  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
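
Because `sudo sh -c "lsmod | grep ip_vs"` exited non-zero, kube-vip's control-plane load-balancing is left disabled and the manifest above only configures ARP-based leader election for the VIP 192.168.49.254 on eth0 (no lb_enable entry appears in the env list). If IPVS were wanted, the modules would have to be loadable on the host first; a hedged sketch (the module list is the conventional IPVS set, not taken from this log):

	lsmod | grep ip_vs
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
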
	I0917 00:07:11.861079  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:07:11.879762  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:07:11.879865  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:07:11.896560  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:07:11.928442  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:07:11.958532  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:07:11.988805  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:07:11.997336  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:12.017582  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:12.177262  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
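
With the kubelet drop-in (10-kubeadm.conf), the kubelet unit file, and the kube-vip static-pod manifest written, systemd is reloaded and kubelet started. On the node, the generated flags from the ExecStart shown above can be confirmed with a few standard commands, for example:

	systemctl cat kubelet
	systemctl is-active kubelet
	pgrep -af kubelet | grep -o -- '--node-ip=[0-9.]*'   # should show --node-ip=192.168.49.3 per the ExecStart above
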
	I0917 00:07:12.199102  767194 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:07:12.199621  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:12.202718  767194 out.go:179] * Verifying Kubernetes components...
	I0917 00:07:12.204066  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:12.356191  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:12.380335  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:07:12.380472  767194 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:07:12.380985  767194 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0917 00:07:13.184442  767194 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0917 00:07:13.184486  767194 node_ready.go:38] duration metric: took 803.457553ms for node "ha-198834-m02" to be "Ready" ...
	I0917 00:07:13.184510  767194 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:07:13.184576  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:13.200500  767194 api_server.go:72] duration metric: took 1.001333458s to wait for apiserver process to appear ...
	I0917 00:07:13.200532  767194 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:07:13.200555  767194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:07:13.213606  767194 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
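
The health probe is a plain HTTPS GET against the first control plane's apiserver; note it targets 192.168.49.2:8443 directly after the stale-host override logged above, rather than the 192.168.49.254 VIP. An equivalent manual check (using -k to skip CA verification, or point curl at the minikube CA instead):

	curl -sk https://192.168.49.2:8443/healthz ; echo
	# expected body: ok
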
	I0917 00:07:13.214727  767194 api_server.go:141] control plane version: v1.34.0
	I0917 00:07:13.214764  767194 api_server.go:131] duration metric: took 14.223116ms to wait for apiserver health ...
	I0917 00:07:13.214777  767194 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:07:13.256193  767194 system_pods.go:59] 24 kube-system pods found
	I0917 00:07:13.256242  767194 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.256252  767194 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.256264  767194 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:07:13.256270  767194 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:07:13.256275  767194 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:07:13.256280  767194 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:07:13.256284  767194 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:07:13.256289  767194 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:07:13.256293  767194 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:07:13.256298  767194 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:07:13.256303  767194 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:07:13.256308  767194 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:07:13.256313  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:07:13.256318  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:07:13.256322  767194 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:07:13.256327  767194 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:07:13.256333  767194 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:07:13.256338  767194 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:07:13.256343  767194 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:07:13.256347  767194 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:07:13.256354  767194 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0917 00:07:13.256358  767194 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:07:13.256363  767194 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:07:13.256369  767194 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:07:13.256384  767194 system_pods.go:74] duration metric: took 41.59977ms to wait for pod list to return data ...
	I0917 00:07:13.256395  767194 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:07:13.264291  767194 default_sa.go:45] found service account: "default"
	I0917 00:07:13.264324  767194 default_sa.go:55] duration metric: took 7.92079ms for default service account to be created ...
	I0917 00:07:13.264336  767194 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:07:13.276453  767194 system_pods.go:86] 24 kube-system pods found
	I0917 00:07:13.276550  767194 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.276578  767194 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.276615  767194 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:07:13.276644  767194 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:07:13.276660  767194 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:07:13.276676  767194 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:07:13.276691  767194 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:07:13.276720  767194 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:07:13.276746  767194 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:07:13.276763  767194 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:07:13.276778  767194 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:07:13.276793  767194 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:07:13.276822  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:07:13.276857  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:07:13.276872  767194 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:07:13.276885  767194 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:07:13.277012  767194 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:07:13.277120  767194 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:07:13.277142  767194 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:07:13.277175  767194 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:07:13.277203  767194 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:07:13.277208  767194 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:07:13.277217  767194 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:07:13.277225  767194 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:07:13.277236  767194 system_pods.go:126] duration metric: took 12.891282ms to wait for k8s-apps to be running ...
	I0917 00:07:13.277249  767194 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:07:13.277375  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:07:13.297819  767194 system_svc.go:56] duration metric: took 20.558975ms WaitForService to wait for kubelet
	I0917 00:07:13.297852  767194 kubeadm.go:578] duration metric: took 1.098690951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:07:13.297875  767194 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:07:13.307482  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307521  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307539  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307677  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307701  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307723  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307753  767194 node_conditions.go:105] duration metric: took 9.872298ms to run NodePressure ...
	I0917 00:07:13.307786  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:07:13.307825  767194 start.go:255] writing updated cluster config ...
	I0917 00:07:13.310261  767194 out.go:203] 
	I0917 00:07:13.313110  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:13.313320  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.314968  767194 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0917 00:07:13.316602  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:07:13.318003  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:07:13.319806  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:07:13.319840  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:07:13.319992  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:07:13.320012  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:07:13.320251  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.320825  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:07:13.347419  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:07:13.347438  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:07:13.347454  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:07:13.347496  767194 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:07:13.347565  767194 start.go:364] duration metric: took 45.359µs to acquireMachinesLock for "ha-198834-m03"
	I0917 00:07:13.347590  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:07:13.347599  767194 fix.go:54] fixHost starting: m03
	I0917 00:07:13.347818  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:07:13.369824  767194 fix.go:112] recreateIfNeeded on ha-198834-m03: state=Stopped err=<nil>
	W0917 00:07:13.369863  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:07:13.371420  767194 out.go:252] * Restarting existing docker container for "ha-198834-m03" ...
	I0917 00:07:13.371502  767194 cli_runner.go:164] Run: docker start ha-198834-m03
	I0917 00:07:13.655593  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:07:13.677720  767194 kic.go:430] container "ha-198834-m03" state is running.
	I0917 00:07:13.678397  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:13.698869  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.699223  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:07:13.699297  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:13.720009  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:13.720402  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:13.720423  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:07:13.721130  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37410->127.0.0.1:32823: read: connection reset by peer
	I0917 00:07:16.888288  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0917 00:07:16.888424  767194 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0917 00:07:16.888511  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:16.916245  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:16.916715  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:16.916774  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0917 00:07:17.072762  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0917 00:07:17.072849  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.090683  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.090891  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.090926  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:07:17.226615  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:07:17.226655  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:07:17.226674  767194 ubuntu.go:190] setting up certificates
	I0917 00:07:17.226686  767194 provision.go:84] configureAuth start
	I0917 00:07:17.226737  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:17.243928  767194 provision.go:143] copyHostCerts
	I0917 00:07:17.243981  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:07:17.244016  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:07:17.244028  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:07:17.244117  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:07:17.244225  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:07:17.244251  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:07:17.244261  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:07:17.244308  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:07:17.244380  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:07:17.244407  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:07:17.244416  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:07:17.244453  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:07:17.244535  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
	I0917 00:07:17.292018  767194 provision.go:177] copyRemoteCerts
	I0917 00:07:17.292080  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:07:17.292117  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.308563  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:17.405828  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:07:17.405893  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:07:17.431262  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:07:17.431334  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:07:17.455746  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:07:17.455816  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:07:17.480475  767194 provision.go:87] duration metric: took 253.772124ms to configureAuth
	I0917 00:07:17.480509  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:07:17.480714  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:17.480758  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.497376  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.497580  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.497596  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:07:17.633636  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:07:17.633662  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:07:17.633805  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:07:17.633874  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.651414  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.651681  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.651795  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:07:17.804026  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:07:17.804120  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.820842  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.821111  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.821138  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
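For context: the one-liner above only swaps in docker.service.new and restarts Docker when the rendered unit actually differs from the installed one, which keeps the provisioning step idempotent. A minimal Go sketch of the same compare-then-swap idea (hypothetical paths and error handling, not minikube's actual implementation) might look like:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // replaceUnitIfChanged installs newPath over curPath and restarts the unit
    // only when the two files differ, mirroring the diff || { mv; restart; } one-liner.
    func replaceUnitIfChanged(curPath, newPath, unit string) error {
        cur, _ := os.ReadFile(curPath) // a missing current file simply counts as "changed"
        next, err := os.ReadFile(newPath)
        if err != nil {
            return err
        }
        if bytes.Equal(cur, next) {
            return nil // identical content: skip the needless daemon restart
        }
        if err := os.Rename(newPath, curPath); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", unit},
            {"systemctl", "restart", unit},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := replaceUnitIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new", "docker"); err != nil {
            log.Fatal(err)
        }
    }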
	I0917 00:07:17.969667  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:07:17.969696  767194 machine.go:96] duration metric: took 4.270454946s to provisionDockerMachine
	I0917 00:07:17.969711  767194 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0917 00:07:17.969724  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:07:17.969792  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:07:17.969841  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.990397  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.094261  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:07:18.098350  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:07:18.098388  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:07:18.098399  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:07:18.098407  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:07:18.098437  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:07:18.098499  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:07:18.098595  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:07:18.098610  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:07:18.098725  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:07:18.109219  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:18.136620  767194 start.go:296] duration metric: took 166.894782ms for postStartSetup
	I0917 00:07:18.136712  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:07:18.136758  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.154707  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.253452  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:07:18.258750  767194 fix.go:56] duration metric: took 4.91114427s for fixHost
	I0917 00:07:18.258774  767194 start.go:83] releasing machines lock for "ha-198834-m03", held for 4.911195885s
	I0917 00:07:18.258832  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:18.277160  767194 out.go:179] * Found network options:
	I0917 00:07:18.278351  767194 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:07:18.279348  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279378  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279406  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279425  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:07:18.279508  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:07:18.279557  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.279572  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:07:18.279629  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.297009  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.297357  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.461356  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:07:18.481814  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:07:18.481895  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:07:18.491087  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
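The two find/sed commands above patch any loopback CNI config so it carries a "name" field and cniVersion 1.0.0, and then disable bridge/podman configs by renaming them to *.mk_disabled. A rough Go equivalent of just the loopback patch, treating the conf as free-form JSON (the file path is illustrative, not taken from this run):

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    // patchLoopbackConf ensures a loopback CNI config has a "name" field and a
    // cniVersion of "1.0.0", roughly what the sed invocations in the log do.
    func patchLoopbackConf(path string) error {
        raw, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var conf map[string]any
        if err := json.Unmarshal(raw, &conf); err != nil {
            return err
        }
        if conf["type"] != "loopback" {
            return nil // only touch loopback configs
        }
        if _, ok := conf["name"]; !ok {
            conf["name"] = "loopback"
        }
        conf["cniVersion"] = "1.0.0"
        out, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := patchLoopbackConf("/etc/cni/net.d/200-loopback.conf"); err != nil {
            log.Fatal(err)
        }
    }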
	I0917 00:07:18.491123  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:07:18.491159  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:07:18.491286  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:07:18.508046  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:07:18.518506  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:07:18.528724  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:07:18.528783  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:07:18.538901  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:07:18.548523  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:07:18.558495  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:07:18.568810  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:07:18.578635  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:07:18.588831  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:07:18.599026  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:07:18.608953  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:07:18.617676  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:07:18.629264  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:18.768747  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:07:18.967427  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:07:18.967485  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:07:18.967537  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:07:18.989293  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:07:19.005620  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:07:19.028890  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:07:19.040741  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:07:19.052468  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:07:19.069901  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:07:19.074018  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:07:19.084197  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:07:19.103723  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:07:19.235291  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:07:19.383050  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:07:19.383098  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:07:19.407054  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:07:19.421996  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:19.555630  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:07:50.623187  767194 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.067490574s)
	I0917 00:07:50.623303  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:07:50.641030  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:07:50.658671  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:07:50.689413  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:50.703046  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:07:50.803170  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:07:50.901724  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:50.993561  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:07:51.017479  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:07:51.029545  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:51.119869  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:07:51.204520  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:51.216519  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:07:51.216591  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:07:51.220572  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:07:51.220624  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:07:51.224162  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:07:51.260602  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:07:51.260663  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:51.285759  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:51.312885  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:07:51.314109  767194 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:07:51.315183  767194 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:07:51.316372  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:07:51.333621  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:07:51.337646  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:51.349463  767194 mustload.go:65] Loading cluster: ha-198834
	I0917 00:07:51.349718  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:51.350027  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:07:51.366938  767194 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:07:51.367221  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0917 00:07:51.367234  767194 certs.go:194] generating shared ca certs ...
	I0917 00:07:51.367257  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:07:51.367403  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:07:51.367473  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:07:51.367486  767194 certs.go:256] generating profile certs ...
	I0917 00:07:51.367595  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:07:51.367661  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0917 00:07:51.367716  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:07:51.367732  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:07:51.367752  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:07:51.367770  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:07:51.367789  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:07:51.367807  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:07:51.367832  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:07:51.367852  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:07:51.367869  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:07:51.367977  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:07:51.368020  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:07:51.368035  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:07:51.368076  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:07:51.368123  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:07:51.368156  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:07:51.368219  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:51.368269  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:07:51.368293  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:07:51.368313  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:51.368380  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:07:51.385113  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:07:51.473207  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:07:51.477858  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:07:51.490558  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:07:51.494138  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:07:51.507164  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:07:51.510845  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:07:51.523649  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:07:51.527311  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:07:51.539889  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:07:51.543488  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:07:51.557348  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:07:51.561022  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:07:51.575140  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:07:51.600746  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:07:51.626754  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:07:51.652660  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:07:51.677825  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:07:51.705137  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:07:51.740575  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:07:51.782394  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:07:51.821612  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:07:51.869185  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:07:51.909129  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:07:51.951856  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:07:51.980155  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:07:52.009170  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:07:52.038558  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:07:52.065379  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:07:52.093597  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:07:52.126589  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:07:52.157625  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:07:52.165683  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:07:52.182691  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.188710  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.188782  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.198794  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:07:52.213539  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:07:52.228292  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.233558  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.233622  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.242917  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:07:52.253428  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:07:52.264188  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.268190  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.268248  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.275453  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:07:52.285681  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:07:52.289640  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:07:52.297959  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:07:52.305434  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:07:52.313682  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:07:52.322656  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:07:52.330627  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:07:52.338015  767194 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0917 00:07:52.338141  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:07:52.338171  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:07:52.338230  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:07:52.353235  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:07:52.353321  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
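In the kube-vip step above, control-plane load-balancing is skipped because `lsmod | grep ip_vs` finds no IPVS modules, so the generated static-pod manifest only configures the ARP-based VIP. A small Go sketch of that module probe, reading /proc/modules directly instead of shelling out (purely illustrative, not kube-vip's or minikube's code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasKernelModule reports whether a module whose name starts with the given
    // prefix appears in /proc/modules, the same data `lsmod | grep ip_vs` inspects.
    func hasKernelModule(prefix string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) > 0 && strings.HasPrefix(fields[0], prefix) {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hasKernelModule("ip_vs")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("ip_vs loaded:", ok)
    }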
	I0917 00:07:52.353383  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:07:52.364085  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:07:52.364180  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:07:52.374489  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:07:52.394684  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:07:52.414928  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:07:52.435081  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:07:52.439302  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
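The bash pipeline above strips any existing control-plane.minikube.internal line from /etc/hosts, appends the current mapping, and copies the result back, so reruns never accumulate duplicate entries. A hedged Go sketch of the same filter-and-append pattern (illustrative; no sudo or atomic-rename handling):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any line ending in "\thost" and appends a fresh
    // "ip\thost" entry, mirroring the grep -v / echo / cp pipeline in the log.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }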
	I0917 00:07:52.451073  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:52.596707  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:52.610374  767194 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:07:52.610770  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:52.613091  767194 out.go:179] * Verifying Kubernetes components...
	I0917 00:07:52.614497  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:52.748599  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:52.767051  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:07:52.767139  767194 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:07:52.767427  767194 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	I0917 00:07:52.771001  767194 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0917 00:07:52.771035  767194 node_ready.go:38] duration metric: took 3.579349ms for node "ha-198834-m03" to be "Ready" ...
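The config dumped just above is a client-go rest.Config authenticating with the profile's client certificate, with the stale VIP host (192.168.49.254) overridden to the primary control plane (192.168.49.2). A hedged sketch of the equivalent client-go usage, reusing the host and certificate paths from this log (standard client-go calls; not the test's own helper code):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // TLS client-cert auth against the first control plane, as in the log
        // after the stale ClientConfig host was overridden.
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key",
                CAFile:   "/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-198834-m03", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Report the same Ready condition the node_ready.go check waits on.
        for _, c := range node.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Println("node Ready condition:", c.Status)
            }
        }
    }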
	I0917 00:07:52.771053  767194 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:07:52.771108  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:53.272115  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:53.771243  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:54.271592  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:54.772153  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:55.272098  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:55.771893  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:56.271870  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:56.771931  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:57.271565  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:57.771663  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:58.272256  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:58.772138  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:59.272247  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:59.772002  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:00.271313  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:00.771538  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:01.272173  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:01.287212  767194 api_server.go:72] duration metric: took 8.676772616s to wait for apiserver process to appear ...
	I0917 00:08:01.287241  767194 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:08:01.287263  767194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:08:01.291600  767194 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:08:01.292548  767194 api_server.go:141] control plane version: v1.34.0
	I0917 00:08:01.292573  767194 api_server.go:131] duration metric: took 5.323927ms to wait for apiserver health ...
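The sequence above first waits for a kube-apiserver process (the pgrep calls repeated roughly every 500ms), then probes /healthz over HTTPS until it returns 200. A simplified, hedged Go sketch of that health poll (standard library only; InsecureSkipVerify is used purely for illustration, whereas a real check would trust the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it answers 200
    // or the deadline passes, similar to the ~500ms polling loop in the log.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Illustration only: skip certificate verification instead of
            // wiring in the cluster CA bundle.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver healthy")
    }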
	I0917 00:08:01.292583  767194 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:08:01.299296  767194 system_pods.go:59] 24 kube-system pods found
	I0917 00:08:01.299329  767194 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0917 00:08:01.299337  767194 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:08:01.299343  767194 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:08:01.299349  767194 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:08:01.299354  767194 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:08:01.299360  767194 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:08:01.299374  767194 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:08:01.299383  767194 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:08:01.299391  767194 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:08:01.299396  767194 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:08:01.299405  767194 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:08:01.299410  767194 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:08:01.299417  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:08:01.299426  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:08:01.299434  767194 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:08:01.299440  767194 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.299452  767194 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.299462  767194 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:08:01.299474  767194 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:08:01.299483  767194 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:08:01.299488  767194 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:08:01.299495  767194 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:08:01.299500  767194 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:08:01.299507  767194 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:08:01.299515  767194 system_pods.go:74] duration metric: took 6.92458ms to wait for pod list to return data ...
	I0917 00:08:01.299527  767194 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:08:01.302268  767194 default_sa.go:45] found service account: "default"
	I0917 00:08:01.302290  767194 default_sa.go:55] duration metric: took 2.753628ms for default service account to be created ...
	I0917 00:08:01.302298  767194 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:08:01.308262  767194 system_pods.go:86] 24 kube-system pods found
	I0917 00:08:01.308290  767194 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0917 00:08:01.308297  767194 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:08:01.308303  767194 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:08:01.308308  767194 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:08:01.308313  767194 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:08:01.308318  767194 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:08:01.308328  767194 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:08:01.308338  767194 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:08:01.308345  767194 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:08:01.308353  767194 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:08:01.308358  767194 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:08:01.308366  767194 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:08:01.308372  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:08:01.308382  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:08:01.308387  767194 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:08:01.308399  767194 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.308406  767194 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.308416  767194 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:08:01.308422  767194 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:08:01.308430  767194 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:08:01.308437  767194 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:08:01.308444  767194 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:08:01.308450  767194 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:08:01.308457  767194 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:08:01.308466  767194 system_pods.go:126] duration metric: took 6.162144ms to wait for k8s-apps to be running ...
	I0917 00:08:01.308477  767194 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:08:01.308531  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:08:01.321442  767194 system_svc.go:56] duration metric: took 12.955822ms WaitForService to wait for kubelet
	I0917 00:08:01.321471  767194 kubeadm.go:578] duration metric: took 8.711043606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:08:01.321497  767194 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:08:01.324862  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324889  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324932  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324940  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324955  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324965  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324975  767194 node_conditions.go:105] duration metric: took 3.472737ms to run NodePressure ...
	I0917 00:08:01.324991  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:08:01.325019  767194 start.go:255] writing updated cluster config ...
	I0917 00:08:01.327247  767194 out.go:203] 
	I0917 00:08:01.328726  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:08:01.328814  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.330445  767194 out.go:179] * Starting "ha-198834-m04" worker node in "ha-198834" cluster
	I0917 00:08:01.331747  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:08:01.333143  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:08:01.334280  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:08:01.334304  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:08:01.334314  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:08:01.334421  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:08:01.334508  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:08:01.334619  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.354767  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:08:01.354793  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:08:01.354813  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:08:01.354846  767194 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:08:01.354978  767194 start.go:364] duration metric: took 110.48µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:08:01.355008  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:08:01.355019  767194 fix.go:54] fixHost starting: m04
	I0917 00:08:01.355235  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:08:01.371130  767194 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:08:01.371158  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:08:01.373077  767194 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:08:01.373153  767194 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:08:01.641002  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:08:01.659099  767194 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:08:01.659469  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:08:01.678005  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.678237  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:08:01.678290  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:08:01.696742  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:08:01.697129  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0917 00:08:01.697150  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:08:01.697961  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37464->127.0.0.1:32828: read: connection reset by peer
	I0917 00:08:04.699300  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:07.701796  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:10.702633  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:13.704979  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:16.706261  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:19.708223  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:22.709325  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:25.709823  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:28.712117  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:31.713282  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:34.713692  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:37.714198  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:40.714526  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:43.715144  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:46.716332  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:49.718233  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:52.719842  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:55.720892  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:58.723145  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:01.724306  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:04.725156  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:07.727215  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:10.727548  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:13.729824  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:16.730195  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:19.732187  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:22.733240  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:25.734470  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:28.736754  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:31.737738  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:34.738212  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:37.740201  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:40.740629  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:43.742209  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:46.743230  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:49.743812  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:52.745547  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:55.746133  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:58.747347  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:01.748104  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:04.749384  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:07.751199  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:10.751605  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:13.754005  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:16.755405  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:19.757166  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:22.759220  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:25.760523  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:28.762825  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:31.764155  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:34.765318  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:37.767696  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:40.768111  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:43.768686  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:46.769636  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:49.771919  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:52.774246  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:55.774600  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:58.776146  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:11:01.777005  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:11:01.777043  767194 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:11:01.777121  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.795827  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.795926  767194 machine.go:96] duration metric: took 3m0.117674387s to provisionDockerMachine
	I0917 00:11:01.796029  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:01.796065  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.813326  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.813470  767194 retry.go:31] will retry after 152.729446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:01.966929  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.985775  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.985883  767194 retry.go:31] will retry after 397.218731ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:02.383496  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:02.403581  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:02.403703  767194 retry.go:31] will retry after 638.635672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.042529  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.059560  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:03.059686  767194 retry.go:31] will retry after 704.769086ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.765290  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.783784  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:11:03.783946  767194 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:03.783981  767194 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.784042  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:11:03.784097  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.801467  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:03.801578  767194 retry.go:31] will retry after 205.36367ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.008065  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.026061  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.026199  767194 retry.go:31] will retry after 386.510214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.413871  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.432422  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.432542  767194 retry.go:31] will retry after 536.785381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.970143  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.987140  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.987259  767194 retry.go:31] will retry after 666.945417ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.654998  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:05.677613  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:11:05.677742  767194 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:05.677760  767194 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.677774  767194 fix.go:56] duration metric: took 3m4.322754949s for fixHost
	I0917 00:11:05.677787  767194 start.go:83] releasing machines lock for "ha-198834-m04", held for 3m4.322792335s
	W0917 00:11:05.677805  767194 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:05.677949  767194 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.677962  767194 start.go:729] Will try again in 5 seconds ...
	I0917 00:11:10.678811  767194 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:11:10.678978  767194 start.go:364] duration metric: took 125.961µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:11:10.679012  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:11:10.679023  767194 fix.go:54] fixHost starting: m04
	I0917 00:11:10.679331  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:11:10.696334  767194 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:11:10.696364  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:11:10.698674  767194 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:11:10.698775  767194 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:11:10.958441  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:11:10.976858  767194 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:11:10.977249  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:11:10.996019  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:11:10.996308  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:11:10.996391  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:11:11.014622  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:11:11.014851  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0917 00:11:11.014862  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:11:11.015528  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40006->127.0.0.1:32833: read: connection reset by peer
	I0917 00:11:14.016664  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:17.018409  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:20.020719  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:23.023197  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:26.024253  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:29.026231  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:32.027234  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:35.028559  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:38.030180  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:41.030858  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:44.031976  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:47.032386  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:50.034183  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:53.036585  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:56.037322  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:59.039174  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:02.040643  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:05.042141  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:08.044484  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:11.044866  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:14.045168  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:17.046169  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:20.047738  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:23.049217  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:26.050288  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:29.052601  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:32.053185  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:35.054173  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:38.056589  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:41.056901  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:44.057410  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:47.058856  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:50.059838  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:53.061223  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:56.061941  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:59.064269  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:02.065654  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:05.066720  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:08.069008  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:11.070247  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:14.071588  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:17.073030  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:20.075194  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:23.075889  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:26.077261  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:29.079216  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:32.080240  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:35.080740  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:38.083067  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:41.083410  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:44.084470  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:47.085187  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:50.087373  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:53.089182  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:56.090200  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:59.091003  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:02.092270  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:05.093183  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:08.094399  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:11.094584  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:14:11.094618  767194 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:14:11.094699  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.112633  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.112730  767194 machine.go:96] duration metric: took 3m0.1164066s to provisionDockerMachine
	I0917 00:14:11.112808  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:14:11.112848  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.131340  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.131459  767194 retry.go:31] will retry after 217.33373ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:11.349947  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.367764  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.367886  767194 retry.go:31] will retry after 328.999453ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:11.697508  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.715227  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.715392  767194 retry.go:31] will retry after 827.670309ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.544130  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.562142  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:14:12.562261  767194 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:12.562274  767194 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.562322  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:14:12.562353  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.581698  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:12.581803  767194 retry.go:31] will retry after 257.155823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.839282  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.856512  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:12.856617  767194 retry.go:31] will retry after 258.093075ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:13.115042  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:13.133383  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:13.133525  767194 retry.go:31] will retry after 435.275696ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:13.569043  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:13.587245  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:13.587350  767194 retry.go:31] will retry after 560.286621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.148585  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:14.167049  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:14:14.167159  767194 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:14.167179  767194 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.167190  767194 fix.go:56] duration metric: took 3m3.488169176s for fixHost
	I0917 00:14:14.167197  767194 start.go:83] releasing machines lock for "ha-198834-m04", held for 3m3.488205367s
	W0917 00:14:14.167315  767194 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-198834" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p ha-198834" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.169966  767194 out.go:203] 
	W0917 00:14:14.171309  767194 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:14.171324  767194 out.go:285] * 
	* 
	W0917 00:14:14.173015  767194 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:14:14.174398  767194 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-198834 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 node list --alsologtostderr -v 5
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-198834	192.168.49.2
ha-198834-m02	192.168.49.3
ha-198834-m03	192.168.49.4
ha-198834-m04	

                                                
                                                
After restart: ha-198834	192.168.49.2
ha-198834-m02	192.168.49.3
ha-198834-m03	192.168.49.4
ha-198834-m04	192.168.49.5
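The failing step throughout the stderr log above is minikube's SSH endpoint probe for ha-198834-m04: it reads the host port mapped to the container's 22/tcp through a docker inspect Go template and then dials 127.0.0.1:<port>, which keeps answering "connection refused" until the roughly three-minute provision window runs out. The Go sketch below reproduces that probe outside of minikube, as an approximation only: it assumes the docker CLI is on PATH, and the helper name hostSSHPort and the hard-coded container name are illustrative rather than minikube code.

// portprobe.go: minimal sketch of the SSH host-port probe seen in the log above.
// Assumes the docker CLI is installed; names here are illustrative, not minikube's.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

// hostSSHPort reads the host port published for the container's 22/tcp,
// using the same inspect template that appears repeatedly in the log.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		// A stopped container has no published ports, so the template fails (exit code 1),
		// which is the "unable to inspect a not running container" case above.
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("ha-198834-m04") // example name taken from this report
	if err != nil {
		fmt.Println("cannot resolve SSH port:", err)
		return
	}
	// Dial the forwarded port the way the provisioner does before opening an SSH session.
	conn, err := net.DialTimeout("tcp", net.JoinHostPort("127.0.0.1", port), 3*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("sshd reachable on host port", port)
}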
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 767393,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:06:25.645261111Z",
	            "FinishedAt": "2025-09-17T00:06:25.028586858Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c4867649ec3bf0587f9374f9f6dd9a46e1de12efb67420295d89335c703f889",
	            "SandboxKey": "/var/run/docker/netns/4c4867649ec3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:18:73:7c:dc:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "72aabe87a74799f11ad2c9fa1888331ed148259ce868576244b9fb8348ce4fc4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
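One detail visible in the inspect output above: HostConfig.PortBindings requests ephemeral host ports (every "HostPort" there is empty), so the concrete ports only exist under NetworkSettings.Ports while the container is running, and they can change across a restart, as they did for m04 (32828 before the docker start, 32833 after). Below is a minimal sketch of reading the assigned 22/tcp host port out of docker inspect JSON; the struct models only the fields used here, the docker CLI is assumed to be on PATH, and the container name is just the example from this report.

// inspectport.go: sketch of extracting the published 22/tcp host port from docker inspect JSON.
// Only the fields needed for this lookup are modelled; everything else in the JSON is ignored.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func assignedSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no inspect data for %s", container)
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("%s has no published 22/tcp port (container not running?)", container)
	}
	return bindings[0].HostPort, nil
}

func main() {
	port, err := assignedSSHPort("ha-198834") // example name from the inspect output above
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("22/tcp is published on host port", port) // 32813 in the JSON above
}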
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.251110348s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt                                                           │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ node    │ ha-198834 node stop m02 --alsologtostderr -v 5                                                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ node    │ ha-198834 node start m02 --alsologtostderr -v 5                                                                                    │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:05 UTC │
	│ node    │ ha-198834 node list --alsologtostderr -v 5                                                                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:05 UTC │                     │
	│ stop    │ ha-198834 stop --alsologtostderr -v 5                                                                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:05 UTC │ 17 Sep 25 00:06 UTC │
	│ start   │ ha-198834 start --wait true --alsologtostderr -v 5                                                                                 │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:06 UTC │                     │
	│ node    │ ha-198834 node list --alsologtostderr -v 5                                                                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:14 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:06:25
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:06:25.424279  767194 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:06:25.424573  767194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:06:25.424581  767194 out.go:374] Setting ErrFile to fd 2...
	I0917 00:06:25.424586  767194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:06:25.424775  767194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:06:25.425286  767194 out.go:368] Setting JSON to false
	I0917 00:06:25.426324  767194 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10118,"bootTime":1758057468,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:06:25.426427  767194 start.go:140] virtualization: kvm guest
	I0917 00:06:25.428578  767194 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:06:25.430211  767194 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:06:25.430246  767194 notify.go:220] Checking for updates...
	I0917 00:06:25.432570  767194 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:06:25.433820  767194 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:25.435087  767194 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:06:25.436546  767194 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:06:25.437859  767194 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:06:25.439704  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:25.439894  767194 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:06:25.464302  767194 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:06:25.464438  767194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:06:25.516697  767194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:06:25.50681521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:06:25.516812  767194 docker.go:318] overlay module found
	I0917 00:06:25.518746  767194 out.go:179] * Using the docker driver based on existing profile
	I0917 00:06:25.519979  767194 start.go:304] selected driver: docker
	I0917 00:06:25.519997  767194 start.go:918] validating driver "docker" against &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:25.520122  767194 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:06:25.520208  767194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:06:25.572516  767194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:06:25.563271649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:06:25.573652  767194 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:06:25.573697  767194 cni.go:84] Creating CNI manager for ""
	I0917 00:06:25.573785  767194 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:06:25.573870  767194 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:f
alse kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:25.576437  767194 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0917 00:06:25.577616  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:06:25.578818  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:06:25.579785  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:25.579821  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:06:25.579826  767194 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:06:25.579871  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:06:25.579979  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:06:25.579993  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:06:25.580143  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:25.599791  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:06:25.599812  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:06:25.599832  767194 cache.go:232] Successfully downloaded all kic artifacts
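
The "Found ... in local docker daemon, skipping pull" step above is a simple presence check against the local Docker daemon before any pull is attempted. A minimal sketch of that idea, using the plain `docker image inspect` CLI rather than minikube's internal image package (the helper below is illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageInDaemon reports whether `docker image inspect` succeeds for ref,
	// i.e. the image is already present in the local daemon.
	func imageInDaemon(ref string) bool {
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}

	func main() {
		// Image reference taken from the log above; the digest suffix is omitted here.
		ref := "gcr.io/k8s-minikube/kicbase:v0.0.48"
		if imageInDaemon(ref) {
			fmt.Println("found in local docker daemon, skipping pull")
			return
		}
		if err := exec.Command("docker", "pull", ref).Run(); err != nil {
			fmt.Println("docker pull failed:", err)
		}
	}
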
	I0917 00:06:25.599862  767194 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:06:25.599948  767194 start.go:364] duration metric: took 62.805µs to acquireMachinesLock for "ha-198834"
	I0917 00:06:25.599973  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:06:25.599982  767194 fix.go:54] fixHost starting: 
	I0917 00:06:25.600220  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:06:25.616766  767194 fix.go:112] recreateIfNeeded on ha-198834: state=Stopped err=<nil>
	W0917 00:06:25.616794  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:06:25.618968  767194 out.go:252] * Restarting existing docker container for "ha-198834" ...
	I0917 00:06:25.619043  767194 cli_runner.go:164] Run: docker start ha-198834
	I0917 00:06:25.855847  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:06:25.873957  767194 kic.go:430] container "ha-198834" state is running.
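
The restart path above is `docker start` followed by `docker container inspect --format={{.State.Status}}` until the daemon reports "running". A rough equivalent, assuming the Docker CLI is on PATH (this is a sketch, not minikube's kic package):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerState asks the daemon for the container's current state string.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		name := "ha-198834" // container name taken from the log above
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			fmt.Println("docker start failed:", err)
			return
		}
		// Poll briefly: `docker start` returns before the container is necessarily running.
		for i := 0; i < 30; i++ {
			if state, err := containerState(name); err == nil && state == "running" {
				fmt.Println("container is running")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for container to run")
	}
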
	I0917 00:06:25.874450  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:25.892189  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:25.892415  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:06:25.892480  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:25.912009  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:25.912263  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:25.912277  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:06:25.912887  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59988->127.0.0.1:32813: read: connection reset by peer
	I0917 00:06:29.050047  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:06:29.050078  767194 ubuntu.go:182] provisioning hostname "ha-198834"
	I0917 00:06:29.050148  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.067712  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.067965  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.067980  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0917 00:06:29.215970  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:06:29.216043  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.234106  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.234329  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.234345  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:06:29.370392  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:29.370431  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:06:29.370460  767194 ubuntu.go:190] setting up certificates
	I0917 00:06:29.370469  767194 provision.go:84] configureAuth start
	I0917 00:06:29.370526  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:29.387543  767194 provision.go:143] copyHostCerts
	I0917 00:06:29.387579  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:29.387610  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:06:29.387629  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:29.387709  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:06:29.387817  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:29.387848  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:06:29.387857  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:29.387927  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:06:29.388004  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:29.388027  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:06:29.388036  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:29.388076  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:06:29.388269  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
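
The "generating server cert" line above lists the subject alternative names the machine certificate must carry: the loopback address, the node IP, the hostname, and the generic minikube names. As a hedged illustration of that step using only the Go standard library (file names, key format, and validity period here are assumptions, not what provision.go actually does):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		// Hypothetical input paths; the run above keeps them under .minikube/certs.
		// Assumes PEM files and an RSA CA key in PKCS#1 form.
		caPEM, err := os.ReadFile("ca.pem")
		check(err)
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		check(err)
		caBlock, _ := pem.Decode(caPEM)
		caKeyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || caKeyBlock == nil {
			log.Fatal("inputs are not PEM encoded")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		check(err)
		caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
		check(err)

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// Same SAN mix as the log line above: DNS names plus IP addresses.
			DNSNames:    []string{"ha-198834", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		check(err)
		check(os.WriteFile("server.pem",
			pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
		check(os.WriteFile("server-key.pem",
			pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
				Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
	}
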
	I0917 00:06:29.680052  767194 provision.go:177] copyRemoteCerts
	I0917 00:06:29.680112  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:06:29.680162  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.697396  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:29.794745  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:06:29.794807  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:06:29.818846  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:06:29.818935  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:06:29.843109  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:06:29.843177  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:06:29.867681  767194 provision.go:87] duration metric: took 497.192274ms to configureAuth
	I0917 00:06:29.867713  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:06:29.867938  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:29.867986  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.885190  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.885426  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.885443  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:06:30.020557  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:06:30.020583  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:06:30.020695  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:06:30.020755  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.038274  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:30.038492  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:30.038556  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:06:30.187120  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:06:30.187195  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.205293  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:30.205508  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:30.205531  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:06:30.346335  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
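
The `diff -u ... || { mv ...; systemctl ... }` one-liner above only swaps in the rendered unit and bounces dockerd when the file actually differs, so an unchanged configuration never triggers a restart. The same idempotency check could be written directly; a sketch, assuming it runs as root on the node:

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func run(args ...string) {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		const unit = "/lib/systemd/system/docker.service"
		current, _ := os.ReadFile(unit) // may not exist yet, treat as empty
		proposed, err := os.ReadFile(unit + ".new")
		if err != nil {
			log.Fatal(err)
		}
		if bytes.Equal(current, proposed) {
			return // nothing changed, leave the running daemon alone
		}
		if err := os.Rename(unit+".new", unit); err != nil {
			log.Fatal(err)
		}
		run("systemctl", "daemon-reload")
		run("systemctl", "enable", "docker")
		run("systemctl", "restart", "docker")
	}
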
	I0917 00:06:30.346367  767194 machine.go:96] duration metric: took 4.453936173s to provisionDockerMachine
	I0917 00:06:30.346383  767194 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0917 00:06:30.346398  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:06:30.346454  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:06:30.346492  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.363443  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.460028  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:06:30.463596  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:06:30.463625  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:06:30.463633  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:06:30.463639  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:06:30.463650  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:06:30.463700  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:06:30.463783  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:06:30.463796  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:06:30.463882  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:06:30.472864  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:30.497731  767194 start.go:296] duration metric: took 151.329262ms for postStartSetup
	I0917 00:06:30.497818  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:06:30.497853  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.515030  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.607057  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:06:30.611598  767194 fix.go:56] duration metric: took 5.011609188s for fixHost
	I0917 00:06:30.611632  767194 start.go:83] releasing machines lock for "ha-198834", held for 5.011665153s
	I0917 00:06:30.611691  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:30.629667  767194 ssh_runner.go:195] Run: cat /version.json
	I0917 00:06:30.629691  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:06:30.629719  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.629746  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.648073  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.648707  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.812105  767194 ssh_runner.go:195] Run: systemctl --version
	I0917 00:06:30.816966  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:06:30.821509  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:06:30.840562  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:06:30.840635  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:06:30.850098  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:06:30.850133  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:30.850174  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:30.850289  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:30.867420  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:06:30.877948  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:06:30.888651  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:06:30.888731  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:06:30.899002  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:30.909052  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:06:30.918885  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:30.928779  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:06:30.938579  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:06:30.949499  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:06:30.960372  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:06:30.971253  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:06:30.980460  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:06:30.989781  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:31.059433  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
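
The run of sed edits above rewrites /etc/containerd/config.toml so containerd's runc handler uses the same systemd cgroup driver that was detected on the host (a cgroup-driver mismatch between kubelet and the runtime is a classic failure mode). The key edit, forcing `SystemdCgroup = true`, could be expressed as a small sketch like this (illustrative only, not minikube's code; a `systemctl restart containerd` as in the log is still needed afterwards):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Flip any existing `SystemdCgroup = ...` assignment to true, preserving
		// indentation, which is what the sed invocation above does.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
		if err := os.WriteFile(path, updated, 0o644); err != nil {
			log.Fatal(err)
		}
	}
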
	I0917 00:06:31.134046  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:31.134104  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:31.134189  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:06:31.147025  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:31.158451  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:06:31.177473  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:31.189232  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:06:31.201624  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:31.218917  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:06:31.222505  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:06:31.231136  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:06:31.249756  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:06:31.318828  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:06:31.386194  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:06:31.386293  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:06:31.405146  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:06:31.416620  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:31.483436  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:06:32.289053  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:06:32.300858  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:06:32.312042  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:06:32.323965  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:06:32.335721  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:06:32.399500  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:06:32.463504  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:32.532114  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:06:32.554184  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:06:32.565656  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:32.632393  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:06:32.706727  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:06:32.718700  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:06:32.718779  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:06:32.722502  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:06:32.722558  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:06:32.725864  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:06:32.759463  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:06:32.759531  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:06:32.784419  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:06:32.811577  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:06:32.811654  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:06:32.828274  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:06:32.832384  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
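
The bash one-liner above pins the host.minikube.internal entry idempotently: it drops any existing line ending in that name and re-appends "192.168.49.1<TAB>host.minikube.internal", then copies the result back over /etc/hosts. The same pattern in Go, as a sketch (writing /etc/hosts requires root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any previous line for name and appends "ip\tname",
	// mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$; cp` one-liner above.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// Values taken from the log above.
		if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
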
	I0917 00:06:32.844198  767194 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:06:32.844338  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:32.844391  767194 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:06:32.866962  767194 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:06:32.866988  767194 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:06:32.867045  767194 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:06:32.888238  767194 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:06:32.888260  767194 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:06:32.888271  767194 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0917 00:06:32.888408  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:06:32.888467  767194 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:06:32.937957  767194 cni.go:84] Creating CNI manager for ""
	I0917 00:06:32.937987  767194 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:06:32.937999  767194 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:06:32.938023  767194 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:06:32.938138  767194 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 00:06:32.938157  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:06:32.938196  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:06:32.951493  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
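
The `lsmod | grep ip_vs` probe above exited non-zero, so kube-vip is configured without IPVS-based control-plane load balancing and falls back to the ARP-announced VIP 192.168.49.254 alone. The same check can be made without shelling out by reading /proc/modules, which is what lsmod itself parses; a sketch:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// moduleLoaded scans /proc/modules for a loaded kernel module by name.
	func moduleLoaded(name string) (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		s := bufio.NewScanner(f)
		for s.Scan() {
			if strings.HasPrefix(s.Text(), name+" ") {
				return true, nil
			}
		}
		return false, s.Err()
	}

	func main() {
		ok, err := moduleLoaded("ip_vs")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if ok {
			fmt.Println("ip_vs present: IPVS load-balancing could be enabled")
		} else {
			fmt.Println("ip_vs missing: use the plain VIP only, as the log does")
		}
	}
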
	I0917 00:06:32.951590  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:06:32.951639  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:06:32.960559  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:06:32.960633  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:06:32.969398  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0917 00:06:32.986997  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:06:33.005302  767194 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0917 00:06:33.023722  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:06:33.042510  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:06:33.046353  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:06:33.057738  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:33.121569  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:06:33.146613  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0917 00:06:33.146635  767194 certs.go:194] generating shared ca certs ...
	I0917 00:06:33.146655  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.146819  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:06:33.146861  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:06:33.146872  767194 certs.go:256] generating profile certs ...
	I0917 00:06:33.147007  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:06:33.147039  767194 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731
	I0917 00:06:33.147053  767194 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:06:33.244684  767194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 ...
	I0917 00:06:33.244725  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731: {Name:mkeb1335a8dc05724d212e3f3c2f54f358e1623c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.244951  767194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731 ...
	I0917 00:06:33.244976  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731: {Name:mkb539de1a460dc24807c303f56b400b0045d38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.245116  767194 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0917 00:06:33.245304  767194 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0917 00:06:33.245488  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:06:33.245509  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:06:33.245530  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:06:33.245548  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:06:33.245569  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:06:33.245589  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:06:33.245603  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:06:33.245616  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:06:33.245631  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:06:33.245698  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:06:33.245742  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:06:33.245759  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:06:33.245789  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:06:33.245819  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:06:33.245852  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:06:33.245931  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:33.245973  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.246001  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.246019  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.246713  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:06:33.280935  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:06:33.310873  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:06:33.335758  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:06:33.364379  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:06:33.390832  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:06:33.415955  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:06:33.440057  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:06:33.463203  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:06:33.486818  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:06:33.510617  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:06:33.534829  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:06:33.553186  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:06:33.558602  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:06:33.568556  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.572286  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.572354  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.579085  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:06:33.588476  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:06:33.598074  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.601602  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.601665  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.608370  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:06:33.617493  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:06:33.626827  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.630358  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.630412  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.637101  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:06:33.645992  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:06:33.649484  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:06:33.657172  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:06:33.664432  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:06:33.673579  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:06:33.681621  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:06:33.690060  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:06:33.697708  767194 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:33.697865  767194 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:06:33.723793  767194 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:06:33.738005  767194 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:06:33.738035  767194 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:06:33.738100  767194 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:06:33.751261  767194 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:06:33.751774  767194 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-198834" does not appear in /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:33.751968  767194 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-661878/kubeconfig needs updating (will repair): [kubeconfig missing "ha-198834" cluster setting kubeconfig missing "ha-198834" context setting]
	I0917 00:06:33.752337  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.752804  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:06:33.753302  767194 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:06:33.753319  767194 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:06:33.753323  767194 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:06:33.753327  767194 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:06:33.753332  767194 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:06:33.753384  767194 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:06:33.753793  767194 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:06:33.766494  767194 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:06:33.766524  767194 kubeadm.go:593] duration metric: took 28.480766ms to restartPrimaryControlPlane
	I0917 00:06:33.766536  767194 kubeadm.go:394] duration metric: took 68.837067ms to StartCluster
	I0917 00:06:33.766560  767194 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.766635  767194 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:33.767596  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.767874  767194 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:06:33.767916  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:06:33.767929  767194 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:06:33.768219  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:33.771075  767194 out.go:179] * Enabled addons: 
	I0917 00:06:33.772321  767194 addons.go:514] duration metric: took 4.387344ms for enable addons: enabled=[]
	I0917 00:06:33.772363  767194 start.go:246] waiting for cluster config update ...
	I0917 00:06:33.772375  767194 start.go:255] writing updated cluster config ...
	I0917 00:06:33.774041  767194 out.go:203] 
	I0917 00:06:33.775488  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:33.775605  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:33.777754  767194 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0917 00:06:33.779232  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:06:33.780466  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:06:33.781663  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:33.781696  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:06:33.781785  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:06:33.781814  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:06:33.781827  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:06:33.782011  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:33.808184  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:06:33.808211  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:06:33.808230  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:06:33.808264  767194 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:06:33.808324  767194 start.go:364] duration metric: took 41.8µs to acquireMachinesLock for "ha-198834-m02"
	I0917 00:06:33.808349  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:06:33.808357  767194 fix.go:54] fixHost starting: m02
	I0917 00:06:33.808657  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:06:33.830576  767194 fix.go:112] recreateIfNeeded on ha-198834-m02: state=Stopped err=<nil>
	W0917 00:06:33.830617  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:06:33.832420  767194 out.go:252] * Restarting existing docker container for "ha-198834-m02" ...
	I0917 00:06:33.832507  767194 cli_runner.go:164] Run: docker start ha-198834-m02
	I0917 00:06:34.153635  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:06:34.174085  767194 kic.go:430] container "ha-198834-m02" state is running.
	I0917 00:06:34.174485  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:34.193433  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:34.193710  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:06:34.193778  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:34.214780  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:34.215097  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:34.215113  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:06:34.215694  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38290->127.0.0.1:32818: read: connection reset by peer
	I0917 00:06:37.354066  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:06:37.354095  767194 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0917 00:06:37.354152  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.371082  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:37.371306  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:37.371320  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0917 00:06:37.519883  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:06:37.519999  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.537320  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:37.537534  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:37.537550  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:06:37.672583  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:37.672613  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:06:37.672631  767194 ubuntu.go:190] setting up certificates
	I0917 00:06:37.672648  767194 provision.go:84] configureAuth start
	I0917 00:06:37.672696  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:37.689646  767194 provision.go:143] copyHostCerts
	I0917 00:06:37.689686  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:37.689726  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:06:37.689739  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:37.689816  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:06:37.689949  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:37.689980  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:06:37.689988  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:37.690037  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:06:37.690112  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:37.690144  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:06:37.690151  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:37.690194  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:06:37.690275  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0917 00:06:37.816978  767194 provision.go:177] copyRemoteCerts
	I0917 00:06:37.817061  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:06:37.817110  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.833876  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:37.931727  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:06:37.931807  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:06:37.957434  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:06:37.957498  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:06:37.982656  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:06:37.982715  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:06:38.008383  767194 provision.go:87] duration metric: took 335.719749ms to configureAuth
	I0917 00:06:38.008424  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:06:38.008674  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:38.008734  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.025557  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.025785  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.025797  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:06:38.163170  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:06:38.163196  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:06:38.163371  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:06:38.163449  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.185210  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.185534  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.185648  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:06:38.356034  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:06:38.356160  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.375350  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.375668  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.375699  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:06:50.199822  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-17 00:04:28.867992287 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-17 00:06:38.349897889 +0000
	@@ -11,6 +11,7 @@
	 Type=notify
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0917 00:06:50.199856  767194 machine.go:96] duration metric: took 16.006130584s to provisionDockerMachine
	I0917 00:06:50.199874  767194 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0917 00:06:50.199898  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:06:50.199991  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:06:50.200037  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.231846  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.352925  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:06:50.364867  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:06:50.365109  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:06:50.365165  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:06:50.365182  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:06:50.365203  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:06:50.365613  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:06:50.365774  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:06:50.365791  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:06:50.366045  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:06:50.388970  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:50.439295  767194 start.go:296] duration metric: took 239.401963ms for postStartSetup
	I0917 00:06:50.439403  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:06:50.439460  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.471007  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.602680  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:06:50.622483  767194 fix.go:56] duration metric: took 16.814116597s for fixHost
	I0917 00:06:50.622519  767194 start.go:83] releasing machines lock for "ha-198834-m02", held for 16.814180436s
	I0917 00:06:50.622611  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:50.653586  767194 out.go:179] * Found network options:
	I0917 00:06:50.656159  767194 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:06:50.657611  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:06:50.657663  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:06:50.657748  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:06:50.657820  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.658056  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:06:50.658130  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.695981  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.696302  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.813556  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:06:50.945454  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:06:50.945549  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:06:50.963173  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:06:50.963207  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:50.963244  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:50.963393  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:51.026654  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:06:51.062543  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:06:51.084179  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:06:51.084245  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:06:51.116429  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:51.134652  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:06:51.149737  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:51.178368  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:06:51.192765  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:06:51.210476  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:06:51.239805  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:06:51.263323  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:06:51.278110  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:06:51.292395  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:51.494387  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:06:51.834314  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:51.834371  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:51.834425  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:06:51.865409  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:51.888868  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:06:51.925439  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:51.950993  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:06:51.977155  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:52.018179  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:06:52.023424  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:06:52.036424  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:06:52.064244  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:06:52.246651  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:06:52.441476  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:06:52.441527  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:06:52.483989  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:06:52.501544  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:52.690125  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:07:09.204303  767194 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.51413344s)
	I0917 00:07:09.204382  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:07:09.225679  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:07:09.253125  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:07:09.286728  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:09.309012  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:07:09.445797  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:07:09.588443  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:09.726437  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:07:09.759063  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:07:09.787528  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:09.918052  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:07:10.070248  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:10.091720  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:07:10.091835  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:07:10.104106  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:07:10.104210  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:07:10.109447  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:07:10.164469  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:07:10.164546  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:10.206116  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:10.251181  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:07:10.252538  767194 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:07:10.254028  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:07:10.280282  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:07:10.286408  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:10.315665  767194 mustload.go:65] Loading cluster: ha-198834
	I0917 00:07:10.317007  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:10.317340  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:07:10.349566  767194 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:07:10.349878  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0917 00:07:10.349892  767194 certs.go:194] generating shared ca certs ...
	I0917 00:07:10.349931  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:07:10.350083  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:07:10.350139  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:07:10.350152  767194 certs.go:256] generating profile certs ...
	I0917 00:07:10.350273  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:07:10.350356  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.11b60fbb
	I0917 00:07:10.350412  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:07:10.350424  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:07:10.350443  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:07:10.350459  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:07:10.350474  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:07:10.350489  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:07:10.350505  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:07:10.350519  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:07:10.350532  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:07:10.350613  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:07:10.350656  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:07:10.350669  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:07:10.350702  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:07:10.350734  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:07:10.350774  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:07:10.350834  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:10.350874  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:10.350896  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:07:10.350924  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:07:10.350992  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:07:10.376726  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:07:10.493359  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:07:10.503886  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:07:10.534629  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:07:10.546504  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:07:10.568315  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:07:10.575486  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:07:10.605107  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:07:10.617021  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:07:10.651278  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:07:10.670568  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:07:10.696371  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:07:10.704200  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:07:10.732773  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:07:10.783862  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:07:10.831455  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:07:10.878503  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:07:10.928036  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:07:10.987893  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:07:11.056094  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:07:11.123465  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:07:11.173229  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:07:11.218880  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:07:11.260399  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:07:11.310489  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:07:11.343030  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:07:11.378463  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:07:11.409826  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:07:11.456579  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:07:11.506523  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:07:11.540827  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:07:11.586318  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:07:11.600141  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:07:11.619035  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.625867  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.626054  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.639263  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:07:11.653785  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:07:11.672133  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.681092  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.681171  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.692463  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:07:11.707982  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:07:11.728502  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.735225  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.735287  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.745817  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
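The `openssl x509 -hash -noout` runs above compute each CA's subject hash, which is then used to name a symlink in /etc/ssl/certs (for example b5213941.0) so that OpenSSL-based clients can locate the certificate by hash lookup. A minimal Go sketch of that wiring, shelling out to openssl and assuming root access and the paths from the log; it is illustrative, not minikube's own code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the trust-store wiring above: OpenSSL finds CAs in
// /etc/ssl/certs by subject-hash file names, so each PEM copied to
// /usr/share/ca-certificates gets a hash-named symlink.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` does
	return os.Symlink(pemPath, link)
}

func main() {
	// Needs root to write into /etc/ssl/certs; paths taken from the log above.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}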
	I0917 00:07:11.762496  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:07:11.768239  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:07:11.782100  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:07:11.796792  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:07:11.807595  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:07:11.818618  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:07:11.828824  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
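The `-checkend 86400` probes above ask whether each control-plane certificate remains valid for at least another 24 hours; a failing check would force regeneration before the node is reused. A standalone sketch of the same check in Go, using the certificate path from the log (this is not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -noout -checkend 86400`: does the cert
	// survive at least another 24h?
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 86400s; regeneration needed")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}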
	I0917 00:07:11.839591  767194 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0917 00:07:11.839780  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:07:11.839824  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:07:11.839873  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:07:11.860859  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:07:11.861012  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
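kube-vip can balance control-plane traffic via IPVS, but only when the ip_vs kernel modules are loaded; because the `lsmod | grep ip_vs` probe above returned nothing, the generated manifest falls back to a plain ARP-advertised VIP (vip_arp "true", address 192.168.49.254). A small sketch of the same probe done directly against /proc/modules, the file lsmod reads; it is illustrative only:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded reports whether any ip_vs* module appears in /proc/modules.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := ipvsLoaded()
	if err != nil {
		panic(err)
	}
	if ok {
		fmt.Println("ip_vs modules present: IPVS control-plane load-balancing possible")
	} else {
		fmt.Println("ip_vs missing: fall back to ARP-only VIP, as in the log above")
	}
}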
	I0917 00:07:11.861079  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:07:11.879762  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:07:11.879865  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:07:11.896560  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:07:11.928442  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:07:11.958532  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:07:11.988805  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:07:11.997336  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
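The /etc/hosts rewrite above is idempotent: any existing line ending in a tab plus control-plane.minikube.internal is dropped before the VIP mapping is appended, so repeated runs never accumulate duplicates. A rough Go equivalent of that pattern; the helper name is chosen here for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any stale mapping for name and appends ip<TAB>name,
// mirroring the `{ grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts` step above.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the old mapping, like `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}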
	I0917 00:07:12.017582  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:12.177262  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:12.199102  767194 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:07:12.199621  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:12.202718  767194 out.go:179] * Verifying Kubernetes components...
	I0917 00:07:12.204066  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:12.356191  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:12.380335  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:07:12.380472  767194 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:07:12.380985  767194 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0917 00:07:13.184442  767194 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0917 00:07:13.184486  767194 node_ready.go:38] duration metric: took 803.457553ms for node "ha-198834-m02" to be "Ready" ...
	I0917 00:07:13.184510  767194 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:07:13.184576  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:13.200500  767194 api_server.go:72] duration metric: took 1.001333458s to wait for apiserver process to appear ...
	I0917 00:07:13.200532  767194 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:07:13.200555  767194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:07:13.213606  767194 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:07:13.214727  767194 api_server.go:141] control plane version: v1.34.0
	I0917 00:07:13.214764  767194 api_server.go:131] duration metric: took 14.223116ms to wait for apiserver health ...
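The healthz wait above is an ordinary mutual-TLS GET against the apiserver: the profile's client certificate and key authenticate the request and the cluster CA verifies the server. A standalone sketch using the certificate paths shown in the client config above (assumed readable by the caller; not minikube's implementation):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Cluster CA and profile client cert/key, as listed in the rest.Config above.
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt",
		"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key",
	)
	if err != nil {
		panic(err)
	}

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{cert},
	}}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok", as in the log
}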
	I0917 00:07:13.214777  767194 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:07:13.256193  767194 system_pods.go:59] 24 kube-system pods found
	I0917 00:07:13.256242  767194 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.256252  767194 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.256264  767194 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:07:13.256270  767194 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:07:13.256275  767194 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:07:13.256280  767194 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:07:13.256284  767194 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:07:13.256289  767194 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:07:13.256293  767194 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:07:13.256298  767194 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:07:13.256303  767194 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:07:13.256308  767194 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:07:13.256313  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:07:13.256318  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:07:13.256322  767194 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:07:13.256327  767194 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:07:13.256333  767194 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:07:13.256338  767194 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:07:13.256343  767194 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:07:13.256347  767194 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:07:13.256354  767194 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0917 00:07:13.256358  767194 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:07:13.256363  767194 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:07:13.256369  767194 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:07:13.256384  767194 system_pods.go:74] duration metric: took 41.59977ms to wait for pod list to return data ...
	I0917 00:07:13.256395  767194 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:07:13.264291  767194 default_sa.go:45] found service account: "default"
	I0917 00:07:13.264324  767194 default_sa.go:55] duration metric: took 7.92079ms for default service account to be created ...
	I0917 00:07:13.264336  767194 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:07:13.276453  767194 system_pods.go:86] 24 kube-system pods found
	I0917 00:07:13.276550  767194 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.276578  767194 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.276615  767194 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:07:13.276644  767194 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:07:13.276660  767194 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:07:13.276676  767194 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:07:13.276691  767194 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:07:13.276720  767194 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:07:13.276746  767194 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:07:13.276763  767194 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:07:13.276778  767194 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:07:13.276793  767194 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:07:13.276822  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:07:13.276857  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:07:13.276872  767194 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:07:13.276885  767194 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:07:13.277012  767194 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:07:13.277120  767194 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:07:13.277142  767194 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:07:13.277175  767194 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:07:13.277203  767194 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:07:13.277208  767194 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:07:13.277217  767194 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:07:13.277225  767194 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:07:13.277236  767194 system_pods.go:126] duration metric: took 12.891282ms to wait for k8s-apps to be running ...
	I0917 00:07:13.277249  767194 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:07:13.277375  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:07:13.297819  767194 system_svc.go:56] duration metric: took 20.558975ms WaitForService to wait for kubelet
	I0917 00:07:13.297852  767194 kubeadm.go:578] duration metric: took 1.098690951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:07:13.297875  767194 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:07:13.307482  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307521  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307539  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307677  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307701  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307723  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307753  767194 node_conditions.go:105] duration metric: took 9.872298ms to run NodePressure ...
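The NodePressure step reads each node's capacity (here 8 CPUs and 304681132Ki of ephemeral storage per node) straight from the Node objects. A short client-go sketch that lists the same fields; the kubeconfig path is a placeholder and the client-go dependency is assumed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point it at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList; Cpu() and StorageEphemeral() return quantities.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
	}
}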
	I0917 00:07:13.307786  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:07:13.307825  767194 start.go:255] writing updated cluster config ...
	I0917 00:07:13.310261  767194 out.go:203] 
	I0917 00:07:13.313110  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:13.313320  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.314968  767194 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0917 00:07:13.316602  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:07:13.318003  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:07:13.319806  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:07:13.319840  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:07:13.319992  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:07:13.320012  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:07:13.320251  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.320825  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:07:13.347419  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:07:13.347438  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:07:13.347454  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:07:13.347496  767194 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:07:13.347565  767194 start.go:364] duration metric: took 45.359µs to acquireMachinesLock for "ha-198834-m03"
	I0917 00:07:13.347590  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:07:13.347599  767194 fix.go:54] fixHost starting: m03
	I0917 00:07:13.347818  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:07:13.369824  767194 fix.go:112] recreateIfNeeded on ha-198834-m03: state=Stopped err=<nil>
	W0917 00:07:13.369863  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:07:13.371420  767194 out.go:252] * Restarting existing docker container for "ha-198834-m03" ...
	I0917 00:07:13.371502  767194 cli_runner.go:164] Run: docker start ha-198834-m03
	I0917 00:07:13.655593  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:07:13.677720  767194 kic.go:430] container "ha-198834-m03" state is running.
	I0917 00:07:13.678397  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:13.698869  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.699223  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:07:13.699297  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:13.720009  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:13.720402  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:13.720423  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:07:13.721130  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37410->127.0.0.1:32823: read: connection reset by peer
	I0917 00:07:16.888288  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0917 00:07:16.888424  767194 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0917 00:07:16.888511  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:16.916245  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:16.916715  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:16.916774  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0917 00:07:17.072762  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0917 00:07:17.072849  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.090683  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.090891  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.090926  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:07:17.226615  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:07:17.226655  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:07:17.226674  767194 ubuntu.go:190] setting up certificates
	I0917 00:07:17.226686  767194 provision.go:84] configureAuth start
	I0917 00:07:17.226737  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:17.243928  767194 provision.go:143] copyHostCerts
	I0917 00:07:17.243981  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:07:17.244016  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:07:17.244028  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:07:17.244117  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:07:17.244225  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:07:17.244251  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:07:17.244261  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:07:17.244308  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:07:17.244380  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:07:17.244407  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:07:17.244416  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:07:17.244453  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:07:17.244535  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
	I0917 00:07:17.292018  767194 provision.go:177] copyRemoteCerts
	I0917 00:07:17.292080  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:07:17.292117  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.308563  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:17.405828  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:07:17.405893  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:07:17.431262  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:07:17.431334  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:07:17.455746  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:07:17.455816  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:07:17.480475  767194 provision.go:87] duration metric: took 253.772124ms to configureAuth
	I0917 00:07:17.480509  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:07:17.480714  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:17.480758  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.497376  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.497580  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.497596  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:07:17.633636  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:07:17.633662  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:07:17.633805  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:07:17.633874  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.651414  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.651681  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.651795  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:07:17.804026  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:07:17.804120  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.820842  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.821111  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.821138  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:07:17.969667  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:07:17.969696  767194 machine.go:96] duration metric: took 4.270454946s to provisionDockerMachine
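provisionDockerMachine only replaces /lib/systemd/system/docker.service and restarts the daemon when the freshly rendered unit actually differs from what is on disk, via the `diff -u ... || { mv ...; systemctl restart docker; }` command above. A generic write-if-changed sketch of that pattern; the function name and sample content are illustrative:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes newContent to path only when it differs from the
// current file, so the caller can skip daemon-reload/restart on no-op updates.
func installIfChanged(path string, newContent []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // identical content: nothing to install, no restart needed
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	changed, err := installIfChanged("/lib/systemd/system/docker.service", unit)
	if err != nil {
		panic(err)
	}
	// When changed is true, the caller would run `systemctl daemon-reload`
	// and `systemctl restart docker`, as the log does.
	fmt.Println("changed:", changed)
}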
	I0917 00:07:17.969711  767194 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0917 00:07:17.969724  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:07:17.969792  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:07:17.969841  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.990397  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.094261  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:07:18.098350  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:07:18.098388  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:07:18.098399  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:07:18.098407  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:07:18.098437  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:07:18.098499  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:07:18.098595  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:07:18.098610  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:07:18.098725  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:07:18.109219  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:18.136620  767194 start.go:296] duration metric: took 166.894782ms for postStartSetup
	I0917 00:07:18.136712  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:07:18.136758  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.154707  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.253452  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:07:18.258750  767194 fix.go:56] duration metric: took 4.91114427s for fixHost
	I0917 00:07:18.258774  767194 start.go:83] releasing machines lock for "ha-198834-m03", held for 4.911195885s
	I0917 00:07:18.258832  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:18.277160  767194 out.go:179] * Found network options:
	I0917 00:07:18.278351  767194 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:07:18.279348  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279378  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279406  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279425  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:07:18.279508  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:07:18.279557  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.279572  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:07:18.279629  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.297009  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.297357  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.461356  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:07:18.481814  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:07:18.481895  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:07:18.491087  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:07:18.491123  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:07:18.491159  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:07:18.491286  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:07:18.508046  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:07:18.518506  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:07:18.528724  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:07:18.528783  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:07:18.538901  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:07:18.548523  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:07:18.558495  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:07:18.568810  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:07:18.578635  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:07:18.588831  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:07:18.599026  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:07:18.608953  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:07:18.617676  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:07:18.629264  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:18.768747  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:07:18.967427  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:07:18.967485  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:07:18.967537  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:07:18.989293  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:07:19.005620  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:07:19.028890  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:07:19.040741  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:07:19.052468  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:07:19.069901  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:07:19.074018  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:07:19.084197  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:07:19.103723  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:07:19.235291  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:07:19.383050  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:07:19.383098  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:07:19.407054  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:07:19.421996  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:19.555630  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:07:50.623187  767194 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.067490574s)
	I0917 00:07:50.623303  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:07:50.641030  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:07:50.658671  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:07:50.689413  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:50.703046  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:07:50.803170  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:07:50.901724  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:50.993561  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:07:51.017479  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:07:51.029545  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:51.119869  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:07:51.204520  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:51.216519  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:07:51.216591  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:07:51.220572  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:07:51.220624  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:07:51.224162  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:07:51.260602  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:07:51.260663  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:51.285759  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:51.312885  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:07:51.314109  767194 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:07:51.315183  767194 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:07:51.316372  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:07:51.333621  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:07:51.337646  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:51.349463  767194 mustload.go:65] Loading cluster: ha-198834
	I0917 00:07:51.349718  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:51.350027  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:07:51.366938  767194 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:07:51.367221  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0917 00:07:51.367234  767194 certs.go:194] generating shared ca certs ...
	I0917 00:07:51.367257  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:07:51.367403  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:07:51.367473  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:07:51.367486  767194 certs.go:256] generating profile certs ...
	I0917 00:07:51.367595  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:07:51.367661  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0917 00:07:51.367716  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:07:51.367732  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:07:51.367752  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:07:51.367770  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:07:51.367789  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:07:51.367807  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:07:51.367832  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:07:51.367852  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:07:51.367869  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:07:51.367977  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:07:51.368020  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:07:51.368035  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:07:51.368076  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:07:51.368123  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:07:51.368156  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:07:51.368219  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:51.368269  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:07:51.368293  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:07:51.368313  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:51.368380  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:07:51.385113  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:07:51.473207  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:07:51.477858  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:07:51.490558  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:07:51.494138  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:07:51.507164  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:07:51.510845  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:07:51.523649  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:07:51.527311  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:07:51.539889  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:07:51.543488  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:07:51.557348  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:07:51.561022  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:07:51.575140  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:07:51.600746  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:07:51.626754  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:07:51.652660  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:07:51.677825  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:07:51.705137  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:07:51.740575  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:07:51.782394  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:07:51.821612  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:07:51.869185  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:07:51.909129  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:07:51.951856  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:07:51.980155  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:07:52.009170  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:07:52.038558  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:07:52.065379  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:07:52.093597  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:07:52.126589  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:07:52.157625  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:07:52.165683  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:07:52.182691  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.188710  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.188782  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.198794  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:07:52.213539  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:07:52.228292  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.233558  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.233622  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.242917  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:07:52.253428  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:07:52.264188  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.268190  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.268248  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.275453  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:07:52.285681  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:07:52.289640  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:07:52.297959  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:07:52.305434  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:07:52.313682  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:07:52.322656  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:07:52.330627  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:07:52.338015  767194 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0917 00:07:52.338141  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:07:52.338171  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:07:52.338230  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:07:52.353235  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:07:52.353321  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:07:52.353383  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:07:52.364085  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:07:52.364180  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:07:52.374489  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:07:52.394684  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:07:52.414928  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:07:52.435081  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:07:52.439302  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:52.451073  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:52.596707  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:52.610374  767194 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:07:52.610770  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:52.613091  767194 out.go:179] * Verifying Kubernetes components...
	I0917 00:07:52.614497  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:52.748599  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:52.767051  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:07:52.767139  767194 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:07:52.767427  767194 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	I0917 00:07:52.771001  767194 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0917 00:07:52.771035  767194 node_ready.go:38] duration metric: took 3.579349ms for node "ha-198834-m03" to be "Ready" ...
	I0917 00:07:52.771053  767194 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:07:52.771108  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:53.272115  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:53.771243  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:54.271592  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:54.772153  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:55.272098  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:55.771893  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:56.271870  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:56.771931  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:57.271565  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:57.771663  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:58.272256  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:58.772138  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:59.272247  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:59.772002  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:00.271313  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:00.771538  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:01.272173  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:01.287212  767194 api_server.go:72] duration metric: took 8.676772616s to wait for apiserver process to appear ...
	I0917 00:08:01.287241  767194 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:08:01.287263  767194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:08:01.291600  767194 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:08:01.292548  767194 api_server.go:141] control plane version: v1.34.0
	I0917 00:08:01.292573  767194 api_server.go:131] duration metric: took 5.323927ms to wait for apiserver health ...
	I0917 00:08:01.292583  767194 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:08:01.299296  767194 system_pods.go:59] 24 kube-system pods found
	I0917 00:08:01.299329  767194 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0917 00:08:01.299337  767194 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:08:01.299343  767194 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:08:01.299349  767194 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:08:01.299354  767194 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:08:01.299360  767194 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:08:01.299374  767194 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:08:01.299383  767194 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:08:01.299391  767194 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:08:01.299396  767194 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:08:01.299405  767194 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:08:01.299410  767194 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:08:01.299417  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:08:01.299426  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:08:01.299434  767194 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:08:01.299440  767194 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.299452  767194 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.299462  767194 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:08:01.299474  767194 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:08:01.299483  767194 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:08:01.299488  767194 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:08:01.299495  767194 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:08:01.299500  767194 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:08:01.299507  767194 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:08:01.299515  767194 system_pods.go:74] duration metric: took 6.92458ms to wait for pod list to return data ...
	I0917 00:08:01.299527  767194 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:08:01.302268  767194 default_sa.go:45] found service account: "default"
	I0917 00:08:01.302290  767194 default_sa.go:55] duration metric: took 2.753628ms for default service account to be created ...
	I0917 00:08:01.302298  767194 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:08:01.308262  767194 system_pods.go:86] 24 kube-system pods found
	I0917 00:08:01.308290  767194 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0917 00:08:01.308297  767194 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:08:01.308303  767194 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:08:01.308308  767194 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:08:01.308313  767194 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:08:01.308318  767194 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:08:01.308328  767194 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:08:01.308338  767194 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:08:01.308345  767194 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:08:01.308353  767194 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:08:01.308358  767194 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:08:01.308366  767194 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:08:01.308372  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:08:01.308382  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:08:01.308387  767194 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:08:01.308399  767194 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.308406  767194 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.308416  767194 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:08:01.308422  767194 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:08:01.308430  767194 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:08:01.308437  767194 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:08:01.308444  767194 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:08:01.308450  767194 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:08:01.308457  767194 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:08:01.308466  767194 system_pods.go:126] duration metric: took 6.162144ms to wait for k8s-apps to be running ...
	I0917 00:08:01.308477  767194 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:08:01.308531  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:08:01.321442  767194 system_svc.go:56] duration metric: took 12.955822ms WaitForService to wait for kubelet
	I0917 00:08:01.321471  767194 kubeadm.go:578] duration metric: took 8.711043606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:08:01.321497  767194 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:08:01.324862  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324889  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324932  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324940  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324955  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324965  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324975  767194 node_conditions.go:105] duration metric: took 3.472737ms to run NodePressure ...
	I0917 00:08:01.324991  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:08:01.325019  767194 start.go:255] writing updated cluster config ...
	I0917 00:08:01.327247  767194 out.go:203] 
	I0917 00:08:01.328726  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:08:01.328814  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.330445  767194 out.go:179] * Starting "ha-198834-m04" worker node in "ha-198834" cluster
	I0917 00:08:01.331747  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:08:01.333143  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:08:01.334280  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:08:01.334304  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:08:01.334314  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:08:01.334421  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:08:01.334508  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:08:01.334619  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.354767  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:08:01.354793  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:08:01.354813  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:08:01.354846  767194 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:08:01.354978  767194 start.go:364] duration metric: took 110.48µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:08:01.355008  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:08:01.355019  767194 fix.go:54] fixHost starting: m04
	I0917 00:08:01.355235  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:08:01.371130  767194 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:08:01.371158  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:08:01.373077  767194 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:08:01.373153  767194 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:08:01.641002  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:08:01.659099  767194 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:08:01.659469  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:08:01.678005  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.678237  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:08:01.678290  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:08:01.696742  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:08:01.697129  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0917 00:08:01.697150  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:08:01.697961  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37464->127.0.0.1:32828: read: connection reset by peer
	I0917 00:08:04.699300  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:07.701796  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:10.702633  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:13.704979  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:16.706261  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:19.708223  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:22.709325  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:25.709823  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:28.712117  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:31.713282  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:34.713692  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:37.714198  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:40.714526  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:43.715144  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:46.716332  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:49.718233  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:52.719842  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:55.720892  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:58.723145  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:01.724306  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:04.725156  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:07.727215  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:10.727548  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:13.729824  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:16.730195  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:19.732187  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:22.733240  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:25.734470  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:28.736754  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:31.737738  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:34.738212  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:37.740201  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:40.740629  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:43.742209  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:46.743230  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:49.743812  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:52.745547  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:55.746133  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:58.747347  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:01.748104  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:04.749384  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:07.751199  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:10.751605  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:13.754005  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:16.755405  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:19.757166  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:22.759220  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:25.760523  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:28.762825  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:31.764155  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:34.765318  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:37.767696  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:40.768111  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:43.768686  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:46.769636  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:49.771919  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:52.774246  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:55.774600  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:58.776146  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:11:01.777005  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:11:01.777043  767194 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:11:01.777121  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.795827  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.795926  767194 machine.go:96] duration metric: took 3m0.117674387s to provisionDockerMachine
	I0917 00:11:01.796029  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:01.796065  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.813326  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.813470  767194 retry.go:31] will retry after 152.729446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:01.966929  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.985775  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.985883  767194 retry.go:31] will retry after 397.218731ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:02.383496  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:02.403581  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:02.403703  767194 retry.go:31] will retry after 638.635672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.042529  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.059560  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:03.059686  767194 retry.go:31] will retry after 704.769086ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.765290  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.783784  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:11:03.783946  767194 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:03.783981  767194 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.784042  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:11:03.784097  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.801467  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:03.801578  767194 retry.go:31] will retry after 205.36367ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.008065  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.026061  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.026199  767194 retry.go:31] will retry after 386.510214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.413871  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.432422  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.432542  767194 retry.go:31] will retry after 536.785381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.970143  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.987140  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.987259  767194 retry.go:31] will retry after 666.945417ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.654998  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:05.677613  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:11:05.677742  767194 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:05.677760  767194 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.677774  767194 fix.go:56] duration metric: took 3m4.322754949s for fixHost
	I0917 00:11:05.677787  767194 start.go:83] releasing machines lock for "ha-198834-m04", held for 3m4.322792335s
	W0917 00:11:05.677805  767194 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:05.677949  767194 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.677962  767194 start.go:729] Will try again in 5 seconds ...
	I0917 00:11:10.678811  767194 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:11:10.678978  767194 start.go:364] duration metric: took 125.961µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:11:10.679012  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:11:10.679023  767194 fix.go:54] fixHost starting: m04
	I0917 00:11:10.679331  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:11:10.696334  767194 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:11:10.696364  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:11:10.698674  767194 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:11:10.698775  767194 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:11:10.958441  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:11:10.976858  767194 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:11:10.977249  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:11:10.996019  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:11:10.996308  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:11:10.996391  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:11:11.014622  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:11:11.014851  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0917 00:11:11.014862  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:11:11.015528  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40006->127.0.0.1:32833: read: connection reset by peer
	I0917 00:11:14.016664  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:17.018409  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:20.020719  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:23.023197  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:26.024253  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:29.026231  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:32.027234  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:35.028559  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:38.030180  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:41.030858  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:44.031976  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:47.032386  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:50.034183  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:53.036585  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:56.037322  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:59.039174  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:02.040643  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:05.042141  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:08.044484  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:11.044866  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:14.045168  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:17.046169  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:20.047738  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:23.049217  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:26.050288  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:29.052601  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:32.053185  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:35.054173  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:38.056589  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:41.056901  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:44.057410  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:47.058856  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:50.059838  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:53.061223  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:56.061941  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:59.064269  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:02.065654  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:05.066720  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:08.069008  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:11.070247  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:14.071588  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:17.073030  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:20.075194  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:23.075889  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:26.077261  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:29.079216  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:32.080240  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:35.080740  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:38.083067  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:41.083410  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:44.084470  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:47.085187  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:50.087373  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:53.089182  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:56.090200  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:59.091003  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:02.092270  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:05.093183  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:08.094399  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:11.094584  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:14:11.094618  767194 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:14:11.094699  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.112633  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.112730  767194 machine.go:96] duration metric: took 3m0.1164066s to provisionDockerMachine
	I0917 00:14:11.112808  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:14:11.112848  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.131340  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.131459  767194 retry.go:31] will retry after 217.33373ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:11.349947  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.367764  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.367886  767194 retry.go:31] will retry after 328.999453ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:11.697508  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.715227  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.715392  767194 retry.go:31] will retry after 827.670309ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.544130  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.562142  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:14:12.562261  767194 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:12.562274  767194 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.562322  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:14:12.562353  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.581698  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:12.581803  767194 retry.go:31] will retry after 257.155823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.839282  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.856512  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:12.856617  767194 retry.go:31] will retry after 258.093075ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:13.115042  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:13.133383  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:13.133525  767194 retry.go:31] will retry after 435.275696ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:13.569043  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:13.587245  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:13.587350  767194 retry.go:31] will retry after 560.286621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.148585  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:14.167049  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:14:14.167159  767194 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:14.167179  767194 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.167190  767194 fix.go:56] duration metric: took 3m3.488169176s for fixHost
	I0917 00:14:14.167197  767194 start.go:83] releasing machines lock for "ha-198834-m04", held for 3m3.488205367s
	W0917 00:14:14.167315  767194 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-198834" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.169966  767194 out.go:203] 
	W0917 00:14:14.171309  767194 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:14.171324  767194 out.go:285] * 
	W0917 00:14:14.173015  767194 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:14:14.174398  767194 out.go:203] 
	
	
	==> Docker <==
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Setting cgroupDriver systemd"
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 17 00:06:32 ha-198834 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-pstjp_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b47695e7722ae97363ea22c63f66096a6ecc511747e54aac5f8ef52c2bccc43f/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/beb17aaed35c336b100468a8af1e4d5a446acc16a51b6d88c169b26f731e4d18/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a13ea6d24610a4b3fe0f24eb6ae80782a60d62b4d2d9232966b5779cbab4b54/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af005efeb3a09eef7fbb97f4b29e8c0d2980e77ba4c7ceccc514d8de19a0c461/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2edf887287bbf8068cce63b7faf1f32074cd90688f7befba7a02a4cb8b00d85f/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:34 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2\""
	Sep 17 00:06:34 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000\""
	Sep 17 00:06:39 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/02337f9cf4b1297217a71f717a99d7fd2b400649baf91af0fe3e64f2ae3bf34b/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a6d2a2147c23d6db38977d2b195118845bcf0f4b7b50bd65e59156087c8f4a36/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ec5acf265466354c265f4a5a6c47300c16e052d876e5b879f13c8cb25513d1df/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd69455479fe49678a69e6c15e7428cf2e0933a67e62ce21b42adc2ddffbbc50/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4da814f488dc35aa80427876bce77b335fc3a2333320170df1e542d7dbf76b68/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:06:40 ha-198834 dockerd[794]: time="2025-09-17T00:06:40.918358490Z" level=info msg="ignoring event" container=c593c83411d202af565aa578ee9c507fe6076579aab28504b4f9fc77eebb5e49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/13e9e86bdcc31c2473895f9f8e326522c316dee735315cefa4058543e1714435/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:52 ha-198834 dockerd[794]: time="2025-09-17T00:06:52.494377563Z" level=info msg="ignoring event" container=1625a23fd7f91dfa311956f9315bcae7fdde0540127a12f56cf5429b147e1f07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	64ab62b23e778       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       2                   a6d2a2147c23d       storage-provisioner
	ab70c5e50e54c       765655ea60781                                                                                         7 minutes ago       Running             kube-vip                  1                   2edf887287bbf       kube-vip-ha-198834
	bdc52003487f9       409467f978b4a                                                                                         7 minutes ago       Running             kindnet-cni               1                   13e9e86bdcc31       kindnet-h28vp
	c593c83411d20       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       1                   a6d2a2147c23d       storage-provisioner
	d130ec085d5ce       8c811b4aec35f                                                                                         7 minutes ago       Running             busybox                   1                   4da814f488dc3       busybox-7b57f96db7-pstjp
	19c8584dae1b9       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   3                   fd69455479fe4       coredns-66bc5c9577-5wx4k
	21dff06737d90       df0860106674d                                                                                         7 minutes ago       Running             kube-proxy                1                   02337f9cf4b12       kube-proxy-5tkhn
	8a501078c4170       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   3                   ec5acf2654663       coredns-66bc5c9577-mjbz6
	9f5475377594b       90550c43ad2bc                                                                                         7 minutes ago       Running             kube-apiserver            1                   b47695e7722ae       kube-apiserver-ha-198834
	1625a23fd7f91       765655ea60781                                                                                         7 minutes ago       Exited              kube-vip                  0                   2edf887287bbf       kube-vip-ha-198834
	e5f91b76238c9       a0af72f2ec6d6                                                                                         7 minutes ago       Running             kube-controller-manager   1                   af005efeb3a09       kube-controller-manager-ha-198834
	371ff065d1dfd       46169d968e920                                                                                         7 minutes ago       Running             kube-scheduler            1                   7a13ea6d24610       kube-scheduler-ha-198834
	7b047b1099553       5f1f5298c888d                                                                                         7 minutes ago       Running             etcd                      1                   beb17aaed35c3       etcd-ha-198834
	43ce744921507       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Exited              busybox                   0                   2ab50e090d466       busybox-7b57f96db7-pstjp
	f4f7ea59034e3       52546a367cc9e                                                                                         16 minutes ago      Exited              coredns                   2                   7b09bb6b1db0b       coredns-66bc5c9577-mjbz6
	9a9eb43950f05       52546a367cc9e                                                                                         16 minutes ago      Exited              coredns                   2                   bfd4ac8a61c79       coredns-66bc5c9577-5wx4k
	470c5aeb0143c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              16 minutes ago      Exited              kindnet-cni               0                   f541f878be896       kindnet-h28vp
	2da683f529549       df0860106674d                                                                                         16 minutes ago      Exited              kube-proxy                0                   b04f554fbbf03       kube-proxy-5tkhn
	4f536df8f44eb       a0af72f2ec6d6                                                                                         17 minutes ago      Exited              kube-controller-manager   0                   3f97e150fa11b       kube-controller-manager-ha-198834
	ea129c2b5408a       90550c43ad2bc                                                                                         17 minutes ago      Exited              kube-apiserver            0                   364803df34eb0       kube-apiserver-ha-198834
	69601afa8d5b0       5f1f5298c888d                                                                                         17 minutes ago      Exited              etcd                      0                   d6bbb58cc14ca       etcd-ha-198834
	82a99d0c7744a       46169d968e920                                                                                         17 minutes ago      Exited              kube-scheduler            0                   7ffde546949d7       kube-scheduler-ha-198834
	
	
	==> coredns [19c8584dae1b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53538 - 29295 "HINFO IN 9023489977302481875.6206531949632663336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037239604s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [8a501078c417] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35492 - 21170 "HINFO IN 5429275037699935078.1019057475364754304. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034969536s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [9a9eb43950f0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48462 - 46874 "HINFO IN 5273252588524494281.7165436024008789767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039842483s
	[INFO] 10.244.1.2:57104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003506199s
	[INFO] 10.244.1.2:35085 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.022178595s
	[INFO] 10.244.0.4:51301 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001619426s
	[INFO] 10.244.1.2:53849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259853s
	[INFO] 10.244.1.2:45188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162256s
	[INFO] 10.244.1.2:47534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012152721s
	[INFO] 10.244.1.2:52406 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144842s
	[INFO] 10.244.0.4:34463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015354953s
	[INFO] 10.244.0.4:44729 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186473s
	[INFO] 10.244.0.4:49846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119774s
	[INFO] 10.244.0.4:48015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170848s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022712s
	[INFO] 10.244.1.2:41177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176943s
	[INFO] 10.244.1.2:35431 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141918s
	[INFO] 10.244.0.4:42357 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222475s
	[INFO] 10.244.0.4:38639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076388s
	[INFO] 10.244.1.2:48245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137114s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f4f7ea59034e] <==
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55965 - 29875 "HINFO IN 6625775143588404920.46653605595863863. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.021009667s
	[INFO] 10.244.1.2:48391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347519s
	[INFO] 10.244.1.2:52968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.006268731s
	[INFO] 10.244.1.2:37064 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.01625905s
	[INFO] 10.244.0.4:38724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015051s
	[INFO] 10.244.0.4:54867 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000812822s
	[INFO] 10.244.0.4:36556 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000823234s
	[INFO] 10.244.0.4:49673 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000092222s
	[INFO] 10.244.1.2:54588 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010881123s
	[INFO] 10.244.1.2:37311 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184317s
	[INFO] 10.244.1.2:34776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214951s
	[INFO] 10.244.1.2:60592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142928s
	[INFO] 10.244.0.4:49014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014497s
	[INFO] 10.244.0.4:49266 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004264282s
	[INFO] 10.244.0.4:42048 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128777s
	[INFO] 10.244.0.4:37542 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071541s
	[INFO] 10.244.1.2:43417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165726s
	[INFO] 10.244.0.4:54211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155228s
	[INFO] 10.244.0.4:54131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161968s
	[INFO] 10.244.1.2:34766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190106s
	[INFO] 10.244.1.2:35363 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179724s
	[INFO] 10.244.1.2:39508 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110509s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:14:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f3c2828aef94f11bd80d984a3eb304b
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m29s                  kube-proxy       
	  Normal  Starting                 16m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    16m                    kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                    kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                    kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           8m42s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  NodeHasSufficientMemory  7m42s (x8 over 7m42s)  kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    7m42s (x8 over 7m42s)  kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x7 over 7m42s)  kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m33s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           6m55s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           6m42s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           5m48s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:14:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:08:25 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:08:25 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:08:25 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:08:25 +0000   Wed, 17 Sep 2025 00:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d9336414c044e558d42395caacb8496
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 16m                    kube-proxy       
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  NodeHasSufficientPID     9m50s (x7 over 9m50s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m50s (x8 over 9m50s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m50s (x8 over 9m50s)  kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m50s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m42s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  Starting                 7m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m40s (x8 over 7m40s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s (x8 over 7m40s)  kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s (x7 over 7m40s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m33s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           6m55s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           6m42s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           5m48s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	
	
	Name:               ha-198834-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_58_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:14:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:11:31 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:11:31 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:11:31 +0000   Tue, 16 Sep 2025 23:58:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:11:31 +0000   Tue, 16 Sep 2025 23:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-198834-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c480137ef4e4a65bfca4c75801b75e8
	  System UUID:                6f810798-3461-44d1-91c3-d55b483ec842
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l2jn5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-198834-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-67fn9                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-198834-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-198834-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-d8brp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-198834-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-198834-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 9m56s                kube-proxy       
	  Normal  RegisteredNode           15m                  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode           15m                  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode           15m                  node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode           8m42s                node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode           7m33s                node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  Starting                 7m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m1s (x8 over 7m1s)  kubelet          Node ha-198834-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m1s (x8 over 7m1s)  kubelet          Node ha-198834-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m1s (x7 over 7m1s)  kubelet          Node ha-198834-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m55s                node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode           6m42s                node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	  Normal  RegisteredNode           5m48s                node-controller  Node ha-198834-m03 event: Registered Node ha-198834-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
	
	
	==> etcd [69601afa8d5b] <==
	{"level":"warn","ts":"2025-09-17T00:06:21.816182Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:06:21.816245Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:06:21.816263Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:06:21.816182Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:06:21.816283Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:06:21.816292Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:06:21.816232Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-17T00:06:21.816324Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816342Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816364Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816409Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816435Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816472Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816689Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816711Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.816726Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.816752Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.816950Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.817063Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.817099Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.817120Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.819127Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-17T00:06:21.819183Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:06:21.819210Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-17T00:06:21.819240Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-198834","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [7b047b109955] <==
	{"level":"info","ts":"2025-09-17T00:07:15.595973Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-17T00:07:15.596020Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:15.596057Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:15.605137Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-17T00:07:15.605187Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:15.618393Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:15.618474Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:07:46.915551Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:07:46.916311Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:07:46.919021Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b3d041dbb5a11c89","error":"failed to dial b3d041dbb5a11c89 on stream MsgApp v2 (EOF)"}
	{"level":"warn","ts":"2025-09-17T00:07:47.083897Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:07:49.537070Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b3d041dbb5a11c89","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:07:49.537126Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b3d041dbb5a11c89","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:07:50.813041Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:07:53.538257Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b3d041dbb5a11c89","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:07:53.538316Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b3d041dbb5a11c89","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:07:57.539346Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b3d041dbb5a11c89","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:07:57.539397Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b3d041dbb5a11c89","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-09-17T00:07:58.904633Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-17T00:07:58.904684Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:58.904700Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:58.906936Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3d041dbb5a11c89","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-17T00:07:58.906971Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:58.923403Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:58.926618Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	
	
	==> kernel <==
	 00:14:15 up  2:56,  0 users,  load average: 0.28, 1.38, 1.51
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [470c5aeb0143] <==
	I0917 00:05:30.418641       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:40.418896       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:40.419001       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:05:40.419203       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:40.419213       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:40.419325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:40.419337       1 main.go:301] handling current node
	I0917 00:05:50.419127       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:50.419157       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:50.419382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:50.419397       1 main.go:301] handling current node
	I0917 00:05:50.419409       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:50.419413       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:06:00.422562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:00.422596       1 main.go:301] handling current node
	I0917 00:06:00.422611       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:06:00.422616       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:06:00.422807       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:06:00.422815       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:06:10.425320       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:10.425358       1 main.go:301] handling current node
	I0917 00:06:10.425375       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:06:10.425381       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:06:10.425598       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:06:10.425613       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [bdc52003487f] <==
	I0917 00:13:31.562855       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:41.571045       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:13:41.571082       1 main.go:301] handling current node
	I0917 00:13:41.571098       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:13:41.571105       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:13:41.571718       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:41.571924       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:51.563112       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:13:51.563147       1 main.go:301] handling current node
	I0917 00:13:51.563166       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:13:51.563171       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:13:51.563440       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:51.563450       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:01.562311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:01.562353       1 main.go:301] handling current node
	I0917 00:14:01.562369       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:01.562373       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:14:01.562589       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:14:01.562603       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:11.571668       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:11.571702       1 main.go:301] handling current node
	I0917 00:14:11.571718       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:11.571723       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:14:11.571936       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:14:11.571959       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [9f5475377594] <==
	E0917 00:07:13.176354       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174810       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174819       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174828       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174837       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174846       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.175006       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.175025       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-09-17T00:07:13.177063Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0007dd680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-09-17T00:07:13.177068Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00115f680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	E0917 00:07:13.177483       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.177936       1 watcher.go:335] watch chan error: etcdserver: no leader
	I0917 00:07:14.364054       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	W0917 00:07:43.229272       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0917 00:07:46.104655       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:03.309841       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:14.885894       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:27.376078       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:26.628008       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:30.857365       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:11:39.501415       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:11:56.232261       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:12:57.532285       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:02.515292       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:58.658174       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [ea129c2b5408] <==
	I0917 00:06:21.818285       1 secure_serving.go:259] Stopped listening on [::]:8443
	I0917 00:06:21.818307       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:06:21.818343       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:06:21.818212       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 00:06:21.818343       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:06:21.818354       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0917 00:06:21.818404       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	E0917 00:06:21.818445       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0917 00:06:21.819652       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.966573ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-hxygcsz4tng6hmluvaoa4vlmha" result=null
	W0917 00:06:22.805712       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:06:22.862569       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-09-17T00:06:22.868061Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00125a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	I0917 00:06:22.868163       1 stats.go:136] "Error getting keys" err="context canceled"
	{"level":"warn","ts":"2025-09-17T00:06:22.868276Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00125a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.869370Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0017f8960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	I0917 00:06:22.869404       1 cidrallocator.go:210] stopping ServiceCIDR Allocator Controller
	{"level":"warn","ts":"2025-09-17T00:06:22.869490Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00125b680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.869797Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0018fc5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.870313Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ce1860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	I0917 00:06:22.870382       1 stats.go:136] "Error getting keys" err="context canceled"
	{"level":"warn","ts":"2025-09-17T00:06:22.870475Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ce1860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.871365Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ed2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	I0917 00:06:22.871420       1 stats.go:136] "Error getting keys" err="context canceled"
	{"level":"warn","ts":"2025-09-17T00:06:22.871506Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ed2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.875069Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002a014a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	
	
	==> kube-controller-manager [4f536df8f44e] <==
	I0916 23:57:23.340737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0916 23:57:23.340876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:57:23.341125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:57:23.341625       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0916 23:57:23.341694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0916 23:57:23.342559       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0916 23:57:23.344828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:57:23.344975       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:57:23.345054       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:57:23.345095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:57:23.345107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:57:23.345114       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:57:23.346125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:23.351186       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834" podCIDRs=["10.244.0.0/24"]
	I0916 23:57:23.356557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:23.360087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:53.917484       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m02\" does not exist"
	I0916 23:57:53.927329       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:58.295579       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	E0916 23:58:24.690329       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 23:58:24.703047       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:58:25.387067       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m03\" does not exist"
	I0916 23:58:25.397154       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m03" podCIDRs=["10.244.2.0/24"]
	I0916 23:58:28.308323       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	E0917 00:01:35.727697       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [e5f91b76238c] <==
	I0917 00:06:42.687854       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:06:42.687864       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:06:42.687933       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0917 00:06:42.688029       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0917 00:06:42.688074       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:06:42.688133       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:06:42.688192       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:06:42.688272       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:06:42.688535       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:06:42.688667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:06:42.689165       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	I0917 00:06:42.689227       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834"
	I0917 00:06:42.689234       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:06:42.689307       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	I0917 00:06:42.689381       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:06:42.689800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:06:42.690667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:06:42.694964       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:06:42.699163       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:06:42.700692       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:06:42.713986       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:06:42.717269       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:06:42.722438       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:06:42.724798       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:06:42.752877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [21dff06737d9] <==
	I0917 00:06:40.905839       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:06:40.968196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0917 00:06:44.060317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-198834&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0917 00:06:45.568444       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:06:45.568482       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:06:45.568583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:06:45.590735       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:06:45.590782       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:06:45.596121       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:06:45.596463       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:06:45.596508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:06:45.597774       1 config.go:200] "Starting service config controller"
	I0917 00:06:45.597791       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:06:45.597883       1 config.go:309] "Starting node config controller"
	I0917 00:06:45.597987       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:06:45.598035       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:06:45.598042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:06:45.598039       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:06:45.598057       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:06:45.698355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:06:45.698442       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:06:45.698447       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:06:45.698470       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [2da683f52954] <==
	I0916 23:57:24.932824       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:57:25.001436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:57:25.102414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:57:25.102449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:57:25.102563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:57:25.131540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:57:25.131604       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:57:25.138482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:57:25.139006       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:57:25.139079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:57:25.143232       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:57:25.143254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:57:25.143282       1 config.go:200] "Starting service config controller"
	I0916 23:57:25.143288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:57:25.143298       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:57:25.143304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:57:25.144514       1 config.go:309] "Starting node config controller"
	I0916 23:57:25.144540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:57:25.144548       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:57:25.243772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [371ff065d1df] <==
	I0917 00:06:34.304210       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:06:39.358570       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 00:06:39.358610       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 00:06:39.358624       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:06:39.358634       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:06:39.390353       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:06:39.390375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:06:39.392538       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:39.392576       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:39.392924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:06:39.392961       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:06:39.493239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [82a99d0c7744] <==
	E0916 23:58:30.048751       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	I0916 23:58:30.051563       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:32.033373       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:32.033442       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b547a423-84fb-45ae-be85-ebd5ae31cede(kube-system/kindnet-wklkh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	E0916 23:58:32.033468       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	I0916 23:58:32.034562       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:34.059741       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	E0916 23:58:34.059840       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ceef8152-3e11-4bf0-99dc-43470c027544(kube-system/kindnet-cdptd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.059869       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.060293       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	E0916 23:58:34.060658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c8813f5f-dfaf-4be1-a0ba-e444bcb2e943(kube-system/kindnet-8t8pb) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	E0916 23:58:34.061375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	I0916 23:58:34.061557       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	I0916 23:58:34.062640       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	I0917 00:01:35.538693       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="ecad6988-1efb-4cc6-8920-902b41d3f3ed" pod="default/busybox-7b57f96db7-kg4q6" assumedNode="ha-198834-m02" currentNode="ha-198834-m03"
	E0917 00:01:35.544474       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m03"
	E0917 00:01:35.546366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ecad6988-1efb-4cc6-8920-902b41d3f3ed(default/busybox-7b57f96db7-kg4q6) was assumed on ha-198834-m03 but assigned to ha-198834-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	E0917 00:01:35.546583       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	I0917 00:01:35.548055       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m02"
	I0917 00:06:14.797858       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:06:14.797982       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:14.797862       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:06:14.798018       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:06:14.798047       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:06:14.798073       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 00:12:13 ha-198834 kubelet[1349]: E0917 00:12:13.466898    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:23 ha-198834 kubelet[1349]: E0917 00:12:23.472100    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:23 ha-198834 kubelet[1349]: E0917 00:12:23.472193    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:33 ha-198834 kubelet[1349]: E0917 00:12:33.476036    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:33 ha-198834 kubelet[1349]: E0917 00:12:33.476130    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:43 ha-198834 kubelet[1349]: E0917 00:12:43.482413    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:43 ha-198834 kubelet[1349]: E0917 00:12:43.482518    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:53 ha-198834 kubelet[1349]: E0917 00:12:53.487015    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:53 ha-198834 kubelet[1349]: E0917 00:12:53.487127    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:13:03 ha-198834 kubelet[1349]: E0917 00:13:03.492319    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:03 ha-198834 kubelet[1349]: E0917 00:13:03.492420    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:13 ha-198834 kubelet[1349]: E0917 00:13:13.496175    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:13 ha-198834 kubelet[1349]: E0917 00:13:13.496282    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:23 ha-198834 kubelet[1349]: E0917 00:13:23.501136    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:23 ha-198834 kubelet[1349]: E0917 00:13:23.501231    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:33 ha-198834 kubelet[1349]: E0917 00:13:33.507713    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:33 ha-198834 kubelet[1349]: E0917 00:13:33.507829    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:43 ha-198834 kubelet[1349]: E0917 00:13:43.509754    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:43 ha-198834 kubelet[1349]: E0917 00:13:43.509855    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:53 ha-198834 kubelet[1349]: E0917 00:13:53.513005    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:53 ha-198834 kubelet[1349]: E0917 00:13:53.513112    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:14:03 ha-198834 kubelet[1349]: E0917 00:14:03.518517    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:14:03 ha-198834 kubelet[1349]: E0917 00:14:03.518636    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315961 maxSize=10485760
	Sep 17 00:14:13 ha-198834 kubelet[1349]: E0917 00:14:13.521966    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:14:13 ha-198834 kubelet[1349]: E0917 00:14:13.522077    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315961 maxSize=10485760
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (503.42s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 node delete m03 --alsologtostderr -v 5: (8.48239753s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (516.210953ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:14:24.995445  787677 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:14:24.995729  787677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:14:24.995740  787677 out.go:374] Setting ErrFile to fd 2...
	I0917 00:14:24.995746  787677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:14:24.996006  787677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:14:24.996224  787677 out.go:368] Setting JSON to false
	I0917 00:14:24.996250  787677 mustload.go:65] Loading cluster: ha-198834
	I0917 00:14:24.996384  787677 notify.go:220] Checking for updates...
	I0917 00:14:24.996705  787677 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:14:24.996743  787677 status.go:174] checking status of ha-198834 ...
	I0917 00:14:24.997255  787677 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:14:25.015835  787677 status.go:371] ha-198834 host status = "Running" (err=<nil>)
	I0917 00:14:25.015879  787677 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:14:25.016228  787677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:14:25.034479  787677 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:14:25.034808  787677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:14:25.034861  787677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:25.052431  787677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:25.146468  787677 ssh_runner.go:195] Run: systemctl --version
	I0917 00:14:25.151106  787677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:14:25.163239  787677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:14:25.218314  787677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-09-17 00:14:25.20875267 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[N
ame:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:14:25.218973  787677 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:14:25.219012  787677 api_server.go:166] Checking apiserver status ...
	I0917 00:14:25.219073  787677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:14:25.232022  787677 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1915/cgroup
	W0917 00:14:25.242401  787677 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1915/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:14:25.242463  787677 ssh_runner.go:195] Run: ls
	I0917 00:14:25.246758  787677 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:14:25.251256  787677 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:14:25.251281  787677 status.go:463] ha-198834 apiserver status = Running (err=<nil>)
	I0917 00:14:25.251291  787677 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:14:25.251307  787677 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:14:25.251534  787677 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:14:25.269940  787677 status.go:371] ha-198834-m02 host status = "Running" (err=<nil>)
	I0917 00:14:25.269966  787677 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:14:25.270284  787677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:14:25.290029  787677 host.go:66] Checking if "ha-198834-m02" exists ...
	I0917 00:14:25.290391  787677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:14:25.290445  787677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:14:25.308268  787677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:14:25.401298  787677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:14:25.413968  787677 kubeconfig.go:125] found "ha-198834" server: "https://192.168.49.254:8443"
	I0917 00:14:25.413998  787677 api_server.go:166] Checking apiserver status ...
	I0917 00:14:25.414040  787677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:14:25.425561  787677 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/3696/cgroup
	W0917 00:14:25.435402  787677 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/3696/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:14:25.435450  787677 ssh_runner.go:195] Run: ls
	I0917 00:14:25.439080  787677 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:14:25.443334  787677 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:14:25.443359  787677 status.go:463] ha-198834-m02 apiserver status = Running (err=<nil>)
	I0917 00:14:25.443371  787677 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:14:25.443391  787677 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:14:25.443643  787677 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:14:25.462034  787677 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:14:25.462055  787677 status.go:384] host is not running, skipping remaining checks
	I0917 00:14:25.462061  787677 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
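The two "unable to find freezer cgroup" warnings in the trace are most likely benign: with the systemd cgroup driver on a cgroup v2 (unified) host there is no per-controller freezer entry in /proc/PID/cgroup, so the egrep exits 1 and the checker falls back to probing /healthz, which returns 200 here. An illustrative check on a cgroup v2 host (not part of the test run):

	cat /proc/self/cgroup                                # single unified entry, e.g. 0::/user.slice/...
	grep ':freezer:' /proc/self/cgroup || echo "no freezer controller (cgroup v2)"
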
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5" : exit status 7
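The non-zero exit comes from the stopped worker: ha-198834-m04 reports Host and Kubelet stopped in the status output above, and minikube status folds per-node state into its exit code (7 here). A small sketch for checking node state in scripts without depending on the exit code — it assumes the JSON field names match the status struct logged in the stderr trace (Name, Host, Kubelet, APIServer) and that jq is available:

	out/minikube-linux-amd64 -p ha-198834 status --output json |
	  jq -r '.[] | [.Name, .Host, .Kubelet, .APIServer] | @tsv'
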
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 767393,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:06:25.645261111Z",
	            "FinishedAt": "2025-09-17T00:06:25.028586858Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c4867649ec3bf0587f9374f9f6dd9a46e1de12efb67420295d89335c703f889",
	            "SandboxKey": "/var/run/docker/netns/4c4867649ec3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:18:73:7c:dc:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "72aabe87a74799f11ad2c9fa1888331ed148259ce868576244b9fb8348ce4fc4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
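The inspect dump above is the raw form of what the status helper reads field by field; the same values can be pulled directly with Go-template filters, mirroring the cli_runner invocations in the stderr trace (container name as in this profile):

	docker container inspect ha-198834 --format '{{.State.Status}}'
	docker container inspect ha-198834 \
	  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
	docker container inspect ha-198834 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
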
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.168895446s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt                                                           │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ node    │ ha-198834 node stop m02 --alsologtostderr -v 5                                                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ node    │ ha-198834 node start m02 --alsologtostderr -v 5                                                                                    │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:05 UTC │
	│ node    │ ha-198834 node list --alsologtostderr -v 5                                                                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:05 UTC │                     │
	│ stop    │ ha-198834 stop --alsologtostderr -v 5                                                                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:05 UTC │ 17 Sep 25 00:06 UTC │
	│ start   │ ha-198834 start --wait true --alsologtostderr -v 5                                                                                 │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:06 UTC │                     │
	│ node    │ ha-198834 node list --alsologtostderr -v 5                                                                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:14 UTC │                     │
	│ node    │ ha-198834 node delete m03 --alsologtostderr -v 5                                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:14 UTC │ 17 Sep 25 00:14 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:06:25
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:06:25.424279  767194 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:06:25.424573  767194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:06:25.424581  767194 out.go:374] Setting ErrFile to fd 2...
	I0917 00:06:25.424586  767194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:06:25.424775  767194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:06:25.425286  767194 out.go:368] Setting JSON to false
	I0917 00:06:25.426324  767194 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10118,"bootTime":1758057468,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:06:25.426427  767194 start.go:140] virtualization: kvm guest
	I0917 00:06:25.428578  767194 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:06:25.430211  767194 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:06:25.430246  767194 notify.go:220] Checking for updates...
	I0917 00:06:25.432570  767194 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:06:25.433820  767194 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:25.435087  767194 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:06:25.436546  767194 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:06:25.437859  767194 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:06:25.439704  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:25.439894  767194 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:06:25.464302  767194 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:06:25.464438  767194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:06:25.516697  767194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:06:25.50681521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[N
ame:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:06:25.516812  767194 docker.go:318] overlay module found
	I0917 00:06:25.518746  767194 out.go:179] * Using the docker driver based on existing profile
	I0917 00:06:25.519979  767194 start.go:304] selected driver: docker
	I0917 00:06:25.519997  767194 start.go:918] validating driver "docker" against &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:25.520122  767194 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:06:25.520208  767194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:06:25.572516  767194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:06:25.563271649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:06:25.573652  767194 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:06:25.573697  767194 cni.go:84] Creating CNI manager for ""
	I0917 00:06:25.573785  767194 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:06:25.573870  767194 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:f
alse kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:25.576437  767194 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0917 00:06:25.577616  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:06:25.578818  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:06:25.579785  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:25.579821  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:06:25.579826  767194 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:06:25.579871  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:06:25.579979  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:06:25.579993  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:06:25.580143  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:25.599791  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:06:25.599812  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:06:25.599832  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:06:25.599862  767194 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:06:25.599948  767194 start.go:364] duration metric: took 62.805µs to acquireMachinesLock for "ha-198834"
	I0917 00:06:25.599973  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:06:25.599982  767194 fix.go:54] fixHost starting: 
	I0917 00:06:25.600220  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:06:25.616766  767194 fix.go:112] recreateIfNeeded on ha-198834: state=Stopped err=<nil>
	W0917 00:06:25.616794  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:06:25.618968  767194 out.go:252] * Restarting existing docker container for "ha-198834" ...
	I0917 00:06:25.619043  767194 cli_runner.go:164] Run: docker start ha-198834
	I0917 00:06:25.855847  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:06:25.873957  767194 kic.go:430] container "ha-198834" state is running.
	I0917 00:06:25.874450  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:25.892189  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:25.892415  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:06:25.892480  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:25.912009  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:25.912263  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:25.912277  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:06:25.912887  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59988->127.0.0.1:32813: read: connection reset by peer
	I0917 00:06:29.050047  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:06:29.050078  767194 ubuntu.go:182] provisioning hostname "ha-198834"
	I0917 00:06:29.050148  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.067712  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.067965  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.067980  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0917 00:06:29.215970  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:06:29.216043  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.234106  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.234329  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.234345  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:06:29.370392  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:29.370431  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:06:29.370460  767194 ubuntu.go:190] setting up certificates
	I0917 00:06:29.370469  767194 provision.go:84] configureAuth start
	I0917 00:06:29.370526  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:29.387543  767194 provision.go:143] copyHostCerts
	I0917 00:06:29.387579  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:29.387610  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:06:29.387629  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:29.387709  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:06:29.387817  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:29.387848  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:06:29.387857  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:29.387927  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:06:29.388004  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:29.388027  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:06:29.388036  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:29.388076  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:06:29.388269  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0917 00:06:29.680052  767194 provision.go:177] copyRemoteCerts
	I0917 00:06:29.680112  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:06:29.680162  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.697396  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:29.794745  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:06:29.794807  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:06:29.818846  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:06:29.818935  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:06:29.843109  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:06:29.843177  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:06:29.867681  767194 provision.go:87] duration metric: took 497.192274ms to configureAuth
	I0917 00:06:29.867713  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:06:29.867938  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:29.867986  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.885190  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.885426  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.885443  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:06:30.020557  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:06:30.020583  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:06:30.020695  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:06:30.020755  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.038274  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:30.038492  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:30.038556  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:06:30.187120  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:06:30.187195  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.205293  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:30.205508  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:30.205531  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:06:30.346335  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:30.346367  767194 machine.go:96] duration metric: took 4.453936173s to provisionDockerMachine
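The unit update above is idempotent: the rendered file is written to /lib/systemd/system/docker.service.new and only swapped into place (followed by daemon-reload, enable and restart) when diff reports a change, and the override clears the inherited ExecStart= before redefining it, as the embedded comment explains. A minimal sketch of the same pattern, assuming a root shell on the node and an already rendered docker.service.new:

	# Swap the unit in only if the rendered copy differs, then reload and restart docker.
	if ! diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    systemctl daemon-reload
	    systemctl enable docker
	    systemctl restart docker
	fi
	# The effective unit (with the cleared-and-redefined ExecStart) can then be inspected with:
	systemctl cat docker.service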
	I0917 00:06:30.346383  767194 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0917 00:06:30.346398  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:06:30.346454  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:06:30.346492  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.363443  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.460028  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:06:30.463596  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:06:30.463625  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:06:30.463633  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:06:30.463639  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:06:30.463650  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:06:30.463700  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:06:30.463783  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:06:30.463796  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:06:30.463882  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:06:30.472864  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:30.497731  767194 start.go:296] duration metric: took 151.329262ms for postStartSetup
	I0917 00:06:30.497818  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:06:30.497853  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.515030  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.607057  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:06:30.611598  767194 fix.go:56] duration metric: took 5.011609188s for fixHost
	I0917 00:06:30.611632  767194 start.go:83] releasing machines lock for "ha-198834", held for 5.011665153s
	I0917 00:06:30.611691  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:30.629667  767194 ssh_runner.go:195] Run: cat /version.json
	I0917 00:06:30.629691  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:06:30.629719  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.629746  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.648073  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.648707  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.812105  767194 ssh_runner.go:195] Run: systemctl --version
	I0917 00:06:30.816966  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:06:30.821509  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:06:30.840562  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:06:30.840635  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:06:30.850098  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:06:30.850133  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:30.850174  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:30.850289  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:30.867420  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:06:30.877948  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:06:30.888651  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:06:30.888731  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:06:30.899002  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:30.909052  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:06:30.918885  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:30.928779  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:06:30.938579  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:06:30.949499  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:06:30.960372  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:06:30.971253  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:06:30.980460  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:06:30.989781  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:31.059433  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:06:31.134046  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:31.134104  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:31.134189  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:06:31.147025  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:31.158451  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:06:31.177473  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:31.189232  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:06:31.201624  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:31.218917  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:06:31.222505  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:06:31.231136  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:06:31.249756  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:06:31.318828  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:06:31.386194  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:06:31.386293  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:06:31.405146  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:06:31.416620  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:31.483436  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:06:32.289053  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:06:32.300858  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:06:32.312042  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:06:32.323965  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:06:32.335721  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:06:32.399500  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:06:32.463504  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:32.532114  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:06:32.554184  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:06:32.565656  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:32.632393  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:06:32.706727  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:06:32.718700  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:06:32.718779  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:06:32.722502  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:06:32.722558  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:06:32.725864  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:06:32.759463  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:06:32.759531  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:06:32.784419  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:06:32.811577  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:06:32.811654  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:06:32.828274  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:06:32.832384  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
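The guarded rewrite above keeps /etc/hosts idempotent: any existing host.minikube.internal line is filtered out before the current 192.168.49.1 mapping is appended. A quick way to confirm the alias resolves on the node afterwards (a sketch, not part of the test flow):

	# Resolve the host-gateway alias through the local hosts file.
	getent hosts host.minikube.internal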
	I0917 00:06:32.844198  767194 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:06:32.844338  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:32.844391  767194 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:06:32.866962  767194 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:06:32.866988  767194 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:06:32.867045  767194 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:06:32.888238  767194 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:06:32.888260  767194 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:06:32.888271  767194 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0917 00:06:32.888408  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:06:32.888467  767194 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:06:32.937957  767194 cni.go:84] Creating CNI manager for ""
	I0917 00:06:32.937987  767194 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:06:32.937999  767194 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:06:32.938023  767194 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:06:32.938138  767194 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
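The rendered kubeadm config above is copied a few steps later to /var/tmp/minikube/kubeadm.yaml.new on the node. As a sanity check outside the test flow, a config like this can be exercised without bootstrapping anything, for example (a sketch, assuming the matching kubeadm v1.34.0 binary is on the PATH and the YAML has been saved locally as kubeadm.yaml):

	# Parse the config and walk the init phases without applying them.
	kubeadm init --config kubeadm.yaml --dry-run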
	I0917 00:06:32.938157  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:06:32.938196  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:06:32.951493  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:06:32.951590  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
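Because the lsmod | grep ip_vs probe above exited with status 1, IPVS-based control-plane load-balancing is skipped and the generated static pod relies on ARP announcement plus leader election (vip_arp and vip_leaderelection both "true") to hold the 192.168.49.254 VIP. On a host where the modules are available, they could be loaded before provisioning, e.g. (a sketch, not something the test does):

	# Load the IPVS module and repeat the probe minikube uses.
	sudo modprobe ip_vs
	lsmod | grep ip_vs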
	I0917 00:06:32.951639  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:06:32.960559  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:06:32.960633  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:06:32.969398  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0917 00:06:32.986997  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:06:33.005302  767194 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0917 00:06:33.023722  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:06:33.042510  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:06:33.046353  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:06:33.057738  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:33.121569  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:06:33.146613  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0917 00:06:33.146635  767194 certs.go:194] generating shared ca certs ...
	I0917 00:06:33.146655  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.146819  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:06:33.146861  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:06:33.146872  767194 certs.go:256] generating profile certs ...
	I0917 00:06:33.147007  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:06:33.147039  767194 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731
	I0917 00:06:33.147053  767194 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:06:33.244684  767194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 ...
	I0917 00:06:33.244725  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731: {Name:mkeb1335a8dc05724d212e3f3c2f54f358e1623c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.244951  767194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731 ...
	I0917 00:06:33.244976  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731: {Name:mkb539de1a460dc24807c303f56b400b0045d38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.245116  767194 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0917 00:06:33.245304  767194 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0917 00:06:33.245488  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:06:33.245509  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:06:33.245530  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:06:33.245548  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:06:33.245569  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:06:33.245589  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:06:33.245603  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:06:33.245616  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:06:33.245631  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:06:33.245698  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:06:33.245742  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:06:33.245759  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:06:33.245789  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:06:33.245819  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:06:33.245852  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:06:33.245931  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:33.245973  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.246001  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.246019  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.246713  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:06:33.280935  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:06:33.310873  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:06:33.335758  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:06:33.364379  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:06:33.390832  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:06:33.415955  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:06:33.440057  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:06:33.463203  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:06:33.486818  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:06:33.510617  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:06:33.534829  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:06:33.553186  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:06:33.558602  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:06:33.568556  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.572286  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.572354  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.579085  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:06:33.588476  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:06:33.598074  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.601602  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.601665  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.608370  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:06:33.617493  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:06:33.626827  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.630358  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.630412  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.637101  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
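The three "ln -fs ... /etc/ssl/certs/<hash>.0" commands above follow OpenSSL's CApath convention: the link is named after the certificate's subject hash, which is exactly what the preceding "openssl x509 -hash -noout -in ..." calls print (hence b5213941.0 for minikubeCA.pem here). The mapping can be reproduced by hand:

	# Print the subject hash that becomes the /etc/ssl/certs/<hash>.0 link name.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem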
	I0917 00:06:33.645992  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:06:33.649484  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:06:33.657172  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:06:33.664432  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:06:33.673579  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:06:33.681621  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:06:33.690060  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
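Each of the "-checkend 86400" calls above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire within that window, 1 means it will. The same check can be run manually against any of these files, e.g.:

	# Succeeds (exit 0) only if the cert remains valid for at least another 24 hours.
	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "ok for 24h"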
	I0917 00:06:33.697708  767194 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspe
ktor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:33.697865  767194 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:06:33.723793  767194 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:06:33.738005  767194 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:06:33.738035  767194 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:06:33.738100  767194 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:06:33.751261  767194 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:06:33.751774  767194 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-198834" does not appear in /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:33.751968  767194 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-661878/kubeconfig needs updating (will repair): [kubeconfig missing "ha-198834" cluster setting kubeconfig missing "ha-198834" context setting]
	I0917 00:06:33.752337  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.752804  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:06:33.753302  767194 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:06:33.753319  767194 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:06:33.753323  767194 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:06:33.753327  767194 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:06:33.753332  767194 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:06:33.753384  767194 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:06:33.753793  767194 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:06:33.766494  767194 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:06:33.766524  767194 kubeadm.go:593] duration metric: took 28.480766ms to restartPrimaryControlPlane
	I0917 00:06:33.766536  767194 kubeadm.go:394] duration metric: took 68.837067ms to StartCluster
	I0917 00:06:33.766560  767194 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.766635  767194 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:33.767596  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.767874  767194 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:06:33.767916  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:06:33.767929  767194 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:06:33.768219  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:33.771075  767194 out.go:179] * Enabled addons: 
	I0917 00:06:33.772321  767194 addons.go:514] duration metric: took 4.387344ms for enable addons: enabled=[]
	I0917 00:06:33.772363  767194 start.go:246] waiting for cluster config update ...
	I0917 00:06:33.772375  767194 start.go:255] writing updated cluster config ...
	I0917 00:06:33.774041  767194 out.go:203] 
	I0917 00:06:33.775488  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:33.775605  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:33.777754  767194 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0917 00:06:33.779232  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:06:33.780466  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:06:33.781663  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:33.781696  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:06:33.781785  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:06:33.781814  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:06:33.781827  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:06:33.782011  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:33.808184  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:06:33.808211  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:06:33.808230  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:06:33.808264  767194 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:06:33.808324  767194 start.go:364] duration metric: took 41.8µs to acquireMachinesLock for "ha-198834-m02"
	I0917 00:06:33.808349  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:06:33.808357  767194 fix.go:54] fixHost starting: m02
	I0917 00:06:33.808657  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:06:33.830576  767194 fix.go:112] recreateIfNeeded on ha-198834-m02: state=Stopped err=<nil>
	W0917 00:06:33.830617  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:06:33.832420  767194 out.go:252] * Restarting existing docker container for "ha-198834-m02" ...
	I0917 00:06:33.832507  767194 cli_runner.go:164] Run: docker start ha-198834-m02
	I0917 00:06:34.153635  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:06:34.174085  767194 kic.go:430] container "ha-198834-m02" state is running.
	I0917 00:06:34.174485  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:34.193433  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:34.193710  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:06:34.193778  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:34.214780  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:34.215097  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:34.215113  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:06:34.215694  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38290->127.0.0.1:32818: read: connection reset by peer
	I0917 00:06:37.354066  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:06:37.354095  767194 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0917 00:06:37.354152  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.371082  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:37.371306  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:37.371320  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0917 00:06:37.519883  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:06:37.519999  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.537320  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:37.537534  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:37.537550  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:06:37.672583  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:37.672613  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:06:37.672631  767194 ubuntu.go:190] setting up certificates
	I0917 00:06:37.672648  767194 provision.go:84] configureAuth start
	I0917 00:06:37.672696  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:37.689646  767194 provision.go:143] copyHostCerts
	I0917 00:06:37.689686  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:37.689726  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:06:37.689739  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:37.689816  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:06:37.689949  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:37.689980  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:06:37.689988  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:37.690037  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:06:37.690112  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:37.690144  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:06:37.690151  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:37.690194  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:06:37.690275  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0917 00:06:37.816978  767194 provision.go:177] copyRemoteCerts
	I0917 00:06:37.817061  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:06:37.817110  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.833876  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:37.931727  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:06:37.931807  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:06:37.957434  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:06:37.957498  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:06:37.982656  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:06:37.982715  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:06:38.008383  767194 provision.go:87] duration metric: took 335.719749ms to configureAuth
	I0917 00:06:38.008424  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:06:38.008674  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:38.008734  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.025557  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.025785  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.025797  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:06:38.163170  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:06:38.163196  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:06:38.163371  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:06:38.163449  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.185210  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.185534  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.185648  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:06:38.356034  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:06:38.356160  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.375350  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.375668  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.375699  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:06:50.199822  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-17 00:04:28.867992287 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-17 00:06:38.349897889 +0000
	@@ -11,6 +11,7 @@
	 Type=notify
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0917 00:06:50.199856  767194 machine.go:96] duration metric: took 16.006130584s to provisionDockerMachine
	I0917 00:06:50.199874  767194 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0917 00:06:50.199898  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:06:50.199991  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:06:50.200037  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.231846  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.352925  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:06:50.364867  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:06:50.365109  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:06:50.365165  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:06:50.365182  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:06:50.365203  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:06:50.365613  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:06:50.365774  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:06:50.365791  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:06:50.366045  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:06:50.388970  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:50.439295  767194 start.go:296] duration metric: took 239.401963ms for postStartSetup
	I0917 00:06:50.439403  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:06:50.439460  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.471007  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.602680  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:06:50.622483  767194 fix.go:56] duration metric: took 16.814116597s for fixHost
	I0917 00:06:50.622519  767194 start.go:83] releasing machines lock for "ha-198834-m02", held for 16.814180436s
	I0917 00:06:50.622611  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:50.653586  767194 out.go:179] * Found network options:
	I0917 00:06:50.656159  767194 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:06:50.657611  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:06:50.657663  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:06:50.657748  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:06:50.657820  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.658056  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:06:50.658130  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.695981  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.696302  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.813556  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:06:50.945454  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:06:50.945549  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:06:50.963173  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:06:50.963207  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:50.963244  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:50.963393  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:51.026654  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:06:51.062543  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:06:51.084179  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:06:51.084245  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:06:51.116429  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:51.134652  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:06:51.149737  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:51.178368  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:06:51.192765  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:06:51.210476  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:06:51.239805  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:06:51.263323  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:06:51.278110  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:06:51.292395  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:51.494387  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:06:51.834314  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:51.834371  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:51.834425  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:06:51.865409  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:51.888868  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:06:51.925439  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:51.950993  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:06:51.977155  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:52.018179  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:06:52.023424  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:06:52.036424  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:06:52.064244  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:06:52.246651  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:06:52.441476  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:06:52.441527  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:06:52.483989  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:06:52.501544  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:52.690125  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:07:09.204303  767194 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.51413344s)
	I0917 00:07:09.204382  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:07:09.225679  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:07:09.253125  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:07:09.286728  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:09.309012  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:07:09.445797  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:07:09.588443  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:09.726437  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:07:09.759063  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:07:09.787528  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:09.918052  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:07:10.070248  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:10.091720  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:07:10.091835  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:07:10.104106  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:07:10.104210  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:07:10.109447  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:07:10.164469  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:07:10.164546  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:10.206116  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:10.251181  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:07:10.252538  767194 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:07:10.254028  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:07:10.280282  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:07:10.286408  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:10.315665  767194 mustload.go:65] Loading cluster: ha-198834
	I0917 00:07:10.317007  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:10.317340  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:07:10.349566  767194 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:07:10.349878  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0917 00:07:10.349892  767194 certs.go:194] generating shared ca certs ...
	I0917 00:07:10.349931  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:07:10.350083  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:07:10.350139  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:07:10.350152  767194 certs.go:256] generating profile certs ...
	I0917 00:07:10.350273  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:07:10.350356  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.11b60fbb
	I0917 00:07:10.350412  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:07:10.350424  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:07:10.350443  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:07:10.350459  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:07:10.350474  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:07:10.350489  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:07:10.350505  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:07:10.350519  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:07:10.350532  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:07:10.350613  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:07:10.350656  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:07:10.350669  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:07:10.350702  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:07:10.350734  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:07:10.350774  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:07:10.350834  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:10.350874  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:10.350896  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:07:10.350924  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:07:10.350992  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:07:10.376726  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:07:10.493359  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:07:10.503886  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:07:10.534629  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:07:10.546504  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:07:10.568315  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:07:10.575486  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:07:10.605107  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:07:10.617021  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:07:10.651278  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:07:10.670568  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:07:10.696371  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:07:10.704200  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:07:10.732773  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:07:10.783862  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:07:10.831455  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:07:10.878503  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:07:10.928036  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:07:10.987893  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:07:11.056094  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:07:11.123465  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:07:11.173229  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:07:11.218880  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:07:11.260399  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:07:11.310489  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:07:11.343030  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:07:11.378463  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:07:11.409826  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:07:11.456579  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:07:11.506523  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:07:11.540827  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:07:11.586318  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:07:11.600141  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:07:11.619035  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.625867  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.626054  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.639263  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:07:11.653785  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:07:11.672133  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.681092  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.681171  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.692463  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:07:11.707982  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:07:11.728502  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.735225  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.735287  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.745817  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:07:11.762496  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:07:11.768239  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:07:11.782100  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:07:11.796792  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:07:11.807595  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:07:11.818618  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:07:11.828824  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:07:11.839591  767194 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0917 00:07:11.839780  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:07:11.839824  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:07:11.839873  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:07:11.860859  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:07:11.861012  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:07:11.861079  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:07:11.879762  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:07:11.879865  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:07:11.896560  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:07:11.928442  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:07:11.958532  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:07:11.988805  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:07:11.997336  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:12.017582  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:12.177262  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:12.199102  767194 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:07:12.199621  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:12.202718  767194 out.go:179] * Verifying Kubernetes components...
	I0917 00:07:12.204066  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:12.356191  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:12.380335  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:07:12.380472  767194 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:07:12.380985  767194 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0917 00:07:13.184442  767194 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0917 00:07:13.184486  767194 node_ready.go:38] duration metric: took 803.457553ms for node "ha-198834-m02" to be "Ready" ...
	I0917 00:07:13.184510  767194 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:07:13.184576  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:13.200500  767194 api_server.go:72] duration metric: took 1.001333458s to wait for apiserver process to appear ...
	I0917 00:07:13.200532  767194 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:07:13.200555  767194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:07:13.213606  767194 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:07:13.214727  767194 api_server.go:141] control plane version: v1.34.0
	I0917 00:07:13.214764  767194 api_server.go:131] duration metric: took 14.223116ms to wait for apiserver health ...
	I0917 00:07:13.214777  767194 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:07:13.256193  767194 system_pods.go:59] 24 kube-system pods found
	I0917 00:07:13.256242  767194 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.256252  767194 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.256264  767194 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:07:13.256270  767194 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:07:13.256275  767194 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:07:13.256280  767194 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:07:13.256284  767194 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:07:13.256289  767194 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:07:13.256293  767194 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:07:13.256298  767194 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:07:13.256303  767194 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:07:13.256308  767194 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:07:13.256313  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:07:13.256318  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:07:13.256322  767194 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:07:13.256327  767194 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:07:13.256333  767194 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:07:13.256338  767194 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:07:13.256343  767194 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:07:13.256347  767194 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:07:13.256354  767194 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0917 00:07:13.256358  767194 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:07:13.256363  767194 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:07:13.256369  767194 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:07:13.256384  767194 system_pods.go:74] duration metric: took 41.59977ms to wait for pod list to return data ...
	I0917 00:07:13.256395  767194 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:07:13.264291  767194 default_sa.go:45] found service account: "default"
	I0917 00:07:13.264324  767194 default_sa.go:55] duration metric: took 7.92079ms for default service account to be created ...
	I0917 00:07:13.264336  767194 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:07:13.276453  767194 system_pods.go:86] 24 kube-system pods found
	I0917 00:07:13.276550  767194 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.276578  767194 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.276615  767194 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:07:13.276644  767194 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:07:13.276660  767194 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:07:13.276676  767194 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:07:13.276691  767194 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:07:13.276720  767194 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:07:13.276746  767194 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:07:13.276763  767194 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:07:13.276778  767194 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:07:13.276793  767194 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:07:13.276822  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:07:13.276857  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:07:13.276872  767194 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:07:13.276885  767194 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:07:13.277012  767194 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:07:13.277120  767194 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:07:13.277142  767194 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:07:13.277175  767194 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:07:13.277203  767194 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:07:13.277208  767194 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:07:13.277217  767194 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:07:13.277225  767194 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:07:13.277236  767194 system_pods.go:126] duration metric: took 12.891282ms to wait for k8s-apps to be running ...
	I0917 00:07:13.277249  767194 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:07:13.277375  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:07:13.297819  767194 system_svc.go:56] duration metric: took 20.558975ms WaitForService to wait for kubelet
	I0917 00:07:13.297852  767194 kubeadm.go:578] duration metric: took 1.098690951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:07:13.297875  767194 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:07:13.307482  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307521  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307539  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307677  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307701  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307723  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307753  767194 node_conditions.go:105] duration metric: took 9.872298ms to run NodePressure ...
	I0917 00:07:13.307786  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:07:13.307825  767194 start.go:255] writing updated cluster config ...
	I0917 00:07:13.310261  767194 out.go:203] 
	I0917 00:07:13.313110  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:13.313320  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.314968  767194 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0917 00:07:13.316602  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:07:13.318003  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:07:13.319806  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:07:13.319840  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:07:13.319992  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:07:13.320012  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:07:13.320251  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.320825  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:07:13.347419  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:07:13.347438  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:07:13.347454  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:07:13.347496  767194 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:07:13.347565  767194 start.go:364] duration metric: took 45.359µs to acquireMachinesLock for "ha-198834-m03"
	I0917 00:07:13.347590  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:07:13.347599  767194 fix.go:54] fixHost starting: m03
	I0917 00:07:13.347818  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:07:13.369824  767194 fix.go:112] recreateIfNeeded on ha-198834-m03: state=Stopped err=<nil>
	W0917 00:07:13.369863  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:07:13.371420  767194 out.go:252] * Restarting existing docker container for "ha-198834-m03" ...
	I0917 00:07:13.371502  767194 cli_runner.go:164] Run: docker start ha-198834-m03
	I0917 00:07:13.655593  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:07:13.677720  767194 kic.go:430] container "ha-198834-m03" state is running.
	I0917 00:07:13.678397  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:13.698869  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.699223  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:07:13.699297  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:13.720009  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:13.720402  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:13.720423  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:07:13.721130  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37410->127.0.0.1:32823: read: connection reset by peer
	I0917 00:07:16.888288  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0917 00:07:16.888424  767194 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0917 00:07:16.888511  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:16.916245  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:16.916715  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:16.916774  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0917 00:07:17.072762  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0917 00:07:17.072849  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.090683  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.090891  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.090926  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:07:17.226615  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:07:17.226655  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:07:17.226674  767194 ubuntu.go:190] setting up certificates
	I0917 00:07:17.226686  767194 provision.go:84] configureAuth start
	I0917 00:07:17.226737  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:17.243928  767194 provision.go:143] copyHostCerts
	I0917 00:07:17.243981  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:07:17.244016  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:07:17.244028  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:07:17.244117  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:07:17.244225  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:07:17.244251  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:07:17.244261  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:07:17.244308  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:07:17.244380  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:07:17.244407  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:07:17.244416  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:07:17.244453  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:07:17.244535  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
	I0917 00:07:17.292018  767194 provision.go:177] copyRemoteCerts
	I0917 00:07:17.292080  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:07:17.292117  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.308563  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:17.405828  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:07:17.405893  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:07:17.431262  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:07:17.431334  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:07:17.455746  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:07:17.455816  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:07:17.480475  767194 provision.go:87] duration metric: took 253.772124ms to configureAuth
	I0917 00:07:17.480509  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:07:17.480714  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:17.480758  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.497376  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.497580  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.497596  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:07:17.633636  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:07:17.633662  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:07:17.633805  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:07:17.633874  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.651414  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.651681  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.651795  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:07:17.804026  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:07:17.804120  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.820842  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.821111  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.821138  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:07:17.969667  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:07:17.969696  767194 machine.go:96] duration metric: took 4.270454946s to provisionDockerMachine
	I0917 00:07:17.969711  767194 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0917 00:07:17.969724  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:07:17.969792  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:07:17.969841  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.990397  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.094261  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:07:18.098350  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:07:18.098388  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:07:18.098399  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:07:18.098407  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:07:18.098437  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:07:18.098499  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:07:18.098595  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:07:18.098610  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:07:18.098725  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:07:18.109219  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:18.136620  767194 start.go:296] duration metric: took 166.894782ms for postStartSetup
	I0917 00:07:18.136712  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:07:18.136758  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.154707  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.253452  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:07:18.258750  767194 fix.go:56] duration metric: took 4.91114427s for fixHost
	I0917 00:07:18.258774  767194 start.go:83] releasing machines lock for "ha-198834-m03", held for 4.911195885s
	I0917 00:07:18.258832  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:18.277160  767194 out.go:179] * Found network options:
	I0917 00:07:18.278351  767194 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:07:18.279348  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279378  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279406  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279425  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:07:18.279508  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:07:18.279557  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.279572  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:07:18.279629  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.297009  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.297357  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.461356  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:07:18.481814  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:07:18.481895  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:07:18.491087  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:07:18.491123  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:07:18.491159  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:07:18.491286  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:07:18.508046  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:07:18.518506  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:07:18.528724  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:07:18.528783  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:07:18.538901  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:07:18.548523  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:07:18.558495  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:07:18.568810  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:07:18.578635  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:07:18.588831  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:07:18.599026  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:07:18.608953  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:07:18.617676  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:07:18.629264  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:18.768747  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:07:18.967427  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:07:18.967485  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:07:18.967537  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:07:18.989293  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:07:19.005620  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:07:19.028890  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:07:19.040741  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:07:19.052468  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:07:19.069901  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:07:19.074018  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:07:19.084197  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:07:19.103723  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:07:19.235291  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:07:19.383050  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:07:19.383098  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:07:19.407054  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:07:19.421996  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:19.555630  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:07:50.623187  767194 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.067490574s)
	I0917 00:07:50.623303  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:07:50.641030  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:07:50.658671  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:07:50.689413  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:50.703046  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:07:50.803170  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:07:50.901724  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:50.993561  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:07:51.017479  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:07:51.029545  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:51.119869  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:07:51.204520  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:51.216519  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:07:51.216591  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:07:51.220572  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:07:51.220624  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:07:51.224162  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:07:51.260602  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:07:51.260663  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:51.285759  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:51.312885  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:07:51.314109  767194 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:07:51.315183  767194 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:07:51.316372  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:07:51.333621  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:07:51.337646  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:51.349463  767194 mustload.go:65] Loading cluster: ha-198834
	I0917 00:07:51.349718  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:51.350027  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:07:51.366938  767194 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:07:51.367221  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0917 00:07:51.367234  767194 certs.go:194] generating shared ca certs ...
	I0917 00:07:51.367257  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:07:51.367403  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:07:51.367473  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:07:51.367486  767194 certs.go:256] generating profile certs ...
	I0917 00:07:51.367595  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:07:51.367661  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0917 00:07:51.367716  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:07:51.367732  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:07:51.367752  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:07:51.367770  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:07:51.367789  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:07:51.367807  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:07:51.367832  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:07:51.367852  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:07:51.367869  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:07:51.367977  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:07:51.368020  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:07:51.368035  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:07:51.368076  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:07:51.368123  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:07:51.368156  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:07:51.368219  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:51.368269  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:07:51.368293  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:07:51.368313  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:51.368380  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:07:51.385113  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:07:51.473207  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:07:51.477858  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:07:51.490558  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:07:51.494138  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:07:51.507164  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:07:51.510845  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:07:51.523649  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:07:51.527311  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:07:51.539889  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:07:51.543488  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:07:51.557348  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:07:51.561022  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:07:51.575140  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:07:51.600746  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:07:51.626754  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:07:51.652660  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:07:51.677825  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:07:51.705137  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:07:51.740575  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:07:51.782394  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:07:51.821612  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:07:51.869185  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:07:51.909129  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:07:51.951856  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:07:51.980155  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:07:52.009170  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:07:52.038558  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:07:52.065379  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:07:52.093597  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:07:52.126589  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:07:52.157625  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:07:52.165683  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:07:52.182691  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.188710  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.188782  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.198794  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:07:52.213539  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:07:52.228292  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.233558  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.233622  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.242917  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:07:52.253428  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:07:52.264188  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.268190  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.268248  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.275453  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:07:52.285681  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:07:52.289640  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:07:52.297959  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:07:52.305434  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:07:52.313682  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:07:52.322656  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:07:52.330627  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:07:52.338015  767194 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0917 00:07:52.338141  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:07:52.338171  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:07:52.338230  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:07:52.353235  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:07:52.353321  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:07:52.353383  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:07:52.364085  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:07:52.364180  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:07:52.374489  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:07:52.394684  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:07:52.414928  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:07:52.435081  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:07:52.439302  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:52.451073  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:52.596707  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:52.610374  767194 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:07:52.610770  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:52.613091  767194 out.go:179] * Verifying Kubernetes components...
	I0917 00:07:52.614497  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:52.748599  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:52.767051  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:07:52.767139  767194 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:07:52.767427  767194 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	I0917 00:07:52.771001  767194 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0917 00:07:52.771035  767194 node_ready.go:38] duration metric: took 3.579349ms for node "ha-198834-m03" to be "Ready" ...
	I0917 00:07:52.771053  767194 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:07:52.771108  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:53.272115  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:53.771243  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:54.271592  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:54.772153  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:55.272098  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:55.771893  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:56.271870  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:56.771931  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:57.271565  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:57.771663  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:58.272256  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:58.772138  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:59.272247  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:59.772002  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:00.271313  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:00.771538  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:01.272173  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:01.287212  767194 api_server.go:72] duration metric: took 8.676772616s to wait for apiserver process to appear ...
	I0917 00:08:01.287241  767194 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:08:01.287263  767194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:08:01.291600  767194 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:08:01.292548  767194 api_server.go:141] control plane version: v1.34.0
	I0917 00:08:01.292573  767194 api_server.go:131] duration metric: took 5.323927ms to wait for apiserver health ...
	I0917 00:08:01.292583  767194 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:08:01.299296  767194 system_pods.go:59] 24 kube-system pods found
	I0917 00:08:01.299329  767194 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0917 00:08:01.299337  767194 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:08:01.299343  767194 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:08:01.299349  767194 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:08:01.299354  767194 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:08:01.299360  767194 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:08:01.299374  767194 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:08:01.299383  767194 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:08:01.299391  767194 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:08:01.299396  767194 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:08:01.299405  767194 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:08:01.299410  767194 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:08:01.299417  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:08:01.299426  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:08:01.299434  767194 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:08:01.299440  767194 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.299452  767194 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.299462  767194 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:08:01.299474  767194 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:08:01.299483  767194 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:08:01.299488  767194 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:08:01.299495  767194 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:08:01.299500  767194 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:08:01.299507  767194 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:08:01.299515  767194 system_pods.go:74] duration metric: took 6.92458ms to wait for pod list to return data ...
	I0917 00:08:01.299527  767194 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:08:01.302268  767194 default_sa.go:45] found service account: "default"
	I0917 00:08:01.302290  767194 default_sa.go:55] duration metric: took 2.753628ms for default service account to be created ...
	I0917 00:08:01.302298  767194 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:08:01.308262  767194 system_pods.go:86] 24 kube-system pods found
	I0917 00:08:01.308290  767194 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0917 00:08:01.308297  767194 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:08:01.308303  767194 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:08:01.308308  767194 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:08:01.308313  767194 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:08:01.308318  767194 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:08:01.308328  767194 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:08:01.308338  767194 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:08:01.308345  767194 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:08:01.308353  767194 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:08:01.308358  767194 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:08:01.308366  767194 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:08:01.308372  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:08:01.308382  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:08:01.308387  767194 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:08:01.308399  767194 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.308406  767194 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.308416  767194 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:08:01.308422  767194 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:08:01.308430  767194 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:08:01.308437  767194 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:08:01.308444  767194 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:08:01.308450  767194 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:08:01.308457  767194 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:08:01.308466  767194 system_pods.go:126] duration metric: took 6.162144ms to wait for k8s-apps to be running ...
	I0917 00:08:01.308477  767194 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:08:01.308531  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:08:01.321442  767194 system_svc.go:56] duration metric: took 12.955822ms WaitForService to wait for kubelet
	I0917 00:08:01.321471  767194 kubeadm.go:578] duration metric: took 8.711043606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
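
The block above waits, in order, for all kube-system pods, the default service account, and the kubelet service on the node. A rough manual reproduction against the same profile (a sketch, not commands taken from this run) would be:

  out/minikube-linux-amd64 -p ha-198834 kubectl -- get pods -n kube-system
  out/minikube-linux-amd64 -p ha-198834 kubectl -- get serviceaccount default
  out/minikube-linux-amd64 -p ha-198834 ssh "sudo systemctl is-active kubelet"
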
	I0917 00:08:01.321497  767194 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:08:01.324862  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324889  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324932  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324940  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324955  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324965  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324975  767194 node_conditions.go:105] duration metric: took 3.472737ms to run NodePressure ...
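
The NodePressure step above reads each node's reported CPU and ephemeral-storage capacity. One hedged way to see the same values directly is to dump the capacity map for every node:

  out/minikube-linux-amd64 -p ha-198834 kubectl -- get nodes -o jsonpath='{.items[*].status.capacity}'
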
	I0917 00:08:01.324991  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:08:01.325019  767194 start.go:255] writing updated cluster config ...
	I0917 00:08:01.327247  767194 out.go:203] 
	I0917 00:08:01.328726  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:08:01.328814  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.330445  767194 out.go:179] * Starting "ha-198834-m04" worker node in "ha-198834" cluster
	I0917 00:08:01.331747  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:08:01.333143  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:08:01.334280  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:08:01.334304  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:08:01.334314  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:08:01.334421  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:08:01.334508  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:08:01.334619  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.354767  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:08:01.354793  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:08:01.354813  767194 cache.go:232] Successfully downloaded all kic artifacts
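
The cache checks above only confirm that the pinned kicbase digest is already in the local Docker daemon and that the preload tarball exists on disk; nothing is downloaded. Assuming the same MINIKUBE_HOME as this run, both can be verified by hand with:

  docker images --digests gcr.io/k8s-minikube/kicbase
  ls -lh /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/
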
	I0917 00:08:01.354846  767194 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:08:01.354978  767194 start.go:364] duration metric: took 110.48µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:08:01.355008  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:08:01.355019  767194 fix.go:54] fixHost starting: m04
	I0917 00:08:01.355235  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:08:01.371130  767194 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:08:01.371158  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:08:01.373077  767194 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:08:01.373153  767194 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:08:01.641002  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:08:01.659099  767194 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:08:01.659469  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:08:01.678005  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.678237  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:08:01.678290  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:08:01.696742  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:08:01.697129  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0917 00:08:01.697150  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:08:01.697961  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37464->127.0.0.1:32828: read: connection reset by peer
	I0917 00:08:04.699300  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:07.701796  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:10.702633  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:13.704979  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:16.706261  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:19.708223  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:22.709325  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:25.709823  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:28.712117  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:31.713282  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:34.713692  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:37.714198  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:40.714526  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:43.715144  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:46.716332  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:49.718233  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:52.719842  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:55.720892  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:58.723145  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:01.724306  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:04.725156  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:07.727215  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:10.727548  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:13.729824  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:16.730195  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:19.732187  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:22.733240  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:25.734470  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:28.736754  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:31.737738  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:34.738212  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:37.740201  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:40.740629  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:43.742209  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:46.743230  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:49.743812  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:52.745547  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:55.746133  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:58.747347  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:01.748104  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:04.749384  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:07.751199  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:10.751605  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:13.754005  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:16.755405  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:19.757166  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:22.759220  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:25.760523  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:28.762825  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:31.764155  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:34.765318  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:37.767696  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:40.768111  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:43.768686  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:46.769636  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:49.771919  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:52.774246  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:55.774600  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:58.776146  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:11:01.777005  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:11:01.777043  767194 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:11:01.777121  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.795827  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.795926  767194 machine.go:96] duration metric: took 3m0.117674387s to provisionDockerMachine
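
The SSH host port is read from the container's published port map via the Go template shown above; on a container that is not running the port map is empty, the template cannot be evaluated, and docker inspect exits with code 1, which is what keeps provisioning stuck here. A hedged manual check of the same state:

  docker container inspect ha-198834-m04 --format '{{.State.Status}}'
  docker port ha-198834-m04 22   # empty while the container is stopped
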
	I0917 00:11:01.796029  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:01.796065  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.813326  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.813470  767194 retry.go:31] will retry after 152.729446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:01.966929  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.985775  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.985883  767194 retry.go:31] will retry after 397.218731ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:02.383496  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:02.403581  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:02.403703  767194 retry.go:31] will retry after 638.635672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.042529  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.059560  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:03.059686  767194 retry.go:31] will retry after 704.769086ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.765290  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.783784  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:11:03.783946  767194 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:03.783981  767194 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.784042  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:11:03.784097  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.801467  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:03.801578  767194 retry.go:31] will retry after 205.36367ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.008065  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.026061  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.026199  767194 retry.go:31] will retry after 386.510214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.413871  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.432422  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.432542  767194 retry.go:31] will retry after 536.785381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.970143  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.987140  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.987259  767194 retry.go:31] will retry after 666.945417ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.654998  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:05.677613  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:11:05.677742  767194 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:05.677760  767194 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
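
The two df pipelines above are how free space on /var is sampled over SSH; with no SSH session available they both fail. For reference, on a reachable node they reduce to single values (illustrative output only):

  df -h /var  | awk 'NR==2{print $5}'   # percent of /var in use, e.g. 21%
  df -BG /var | awk 'NR==2{print $4}'   # GiB still available on /var, e.g. 230G
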
	I0917 00:11:05.677774  767194 fix.go:56] duration metric: took 3m4.322754949s for fixHost
	I0917 00:11:05.677787  767194 start.go:83] releasing machines lock for "ha-198834-m04", held for 3m4.322792335s
	W0917 00:11:05.677805  767194 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:05.677949  767194 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.677962  767194 start.go:729] Will try again in 5 seconds ...
	I0917 00:11:10.678811  767194 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:11:10.678978  767194 start.go:364] duration metric: took 125.961µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:11:10.679012  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:11:10.679023  767194 fix.go:54] fixHost starting: m04
	I0917 00:11:10.679331  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:11:10.696334  767194 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:11:10.696364  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:11:10.698674  767194 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:11:10.698775  767194 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:11:10.958441  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:11:10.976858  767194 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:11:10.977249  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:11:10.996019  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:11:10.996308  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:11:10.996391  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:11:11.014622  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:11:11.014851  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0917 00:11:11.014862  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:11:11.015528  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40006->127.0.0.1:32833: read: connection reset by peer
	I0917 00:11:14.016664  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:17.018409  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:20.020719  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:23.023197  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:26.024253  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:29.026231  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:32.027234  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:35.028559  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:38.030180  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:41.030858  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:44.031976  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:47.032386  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:50.034183  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:53.036585  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:56.037322  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:59.039174  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:02.040643  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:05.042141  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:08.044484  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:11.044866  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:14.045168  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:17.046169  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:20.047738  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:23.049217  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:26.050288  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:29.052601  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:32.053185  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:35.054173  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:38.056589  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:41.056901  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:44.057410  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:47.058856  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:50.059838  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:53.061223  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:56.061941  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:59.064269  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:02.065654  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:05.066720  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:08.069008  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:11.070247  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:14.071588  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:17.073030  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:20.075194  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:23.075889  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:26.077261  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:29.079216  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:32.080240  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:35.080740  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:38.083067  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:41.083410  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:44.084470  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:47.085187  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:50.087373  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:53.089182  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:56.090200  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:59.091003  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:02.092270  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:05.093183  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:08.094399  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:11.094584  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:14:11.094618  767194 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:14:11.094699  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.112633  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.112730  767194 machine.go:96] duration metric: took 3m0.1164066s to provisionDockerMachine
	I0917 00:14:11.112808  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:14:11.112848  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.131340  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.131459  767194 retry.go:31] will retry after 217.33373ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:11.349947  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.367764  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.367886  767194 retry.go:31] will retry after 328.999453ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:11.697508  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.715227  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.715392  767194 retry.go:31] will retry after 827.670309ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.544130  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.562142  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:14:12.562261  767194 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:12.562274  767194 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.562322  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:14:12.562353  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.581698  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:12.581803  767194 retry.go:31] will retry after 257.155823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.839282  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.856512  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:12.856617  767194 retry.go:31] will retry after 258.093075ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:13.115042  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:13.133383  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:13.133525  767194 retry.go:31] will retry after 435.275696ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:13.569043  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:13.587245  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:13.587350  767194 retry.go:31] will retry after 560.286621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.148585  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:14.167049  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:14:14.167159  767194 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:14.167179  767194 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.167190  767194 fix.go:56] duration metric: took 3m3.488169176s for fixHost
	I0917 00:14:14.167197  767194 start.go:83] releasing machines lock for "ha-198834-m04", held for 3m3.488205367s
	W0917 00:14:14.167315  767194 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-198834" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.169966  767194 out.go:203] 
	W0917 00:14:14.171309  767194 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:14.171324  767194 out.go:285] * 
	W0917 00:14:14.173015  767194 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:14:14.174398  767194 out.go:203] 
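
The run exits with GUEST_START after the m04 worker never exposed an SSH port. Following the hint printed above, a plausible (untested here) recovery is to delete the profile and start it again; the original test's driver and node flags would have to be repeated on the new start:

  out/minikube-linux-amd64 delete -p ha-198834
  out/minikube-linux-amd64 start -p ha-198834 --driver=docker   # plus this run's original HA/node flags
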
	
	
	==> Docker <==
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Setting cgroupDriver systemd"
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 17 00:06:32 ha-198834 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-pstjp_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b47695e7722ae97363ea22c63f66096a6ecc511747e54aac5f8ef52c2bccc43f/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/beb17aaed35c336b100468a8af1e4d5a446acc16a51b6d88c169b26f731e4d18/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a13ea6d24610a4b3fe0f24eb6ae80782a60d62b4d2d9232966b5779cbab4b54/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af005efeb3a09eef7fbb97f4b29e8c0d2980e77ba4c7ceccc514d8de19a0c461/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2edf887287bbf8068cce63b7faf1f32074cd90688f7befba7a02a4cb8b00d85f/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:34 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2\""
	Sep 17 00:06:34 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000\""
	Sep 17 00:06:39 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/02337f9cf4b1297217a71f717a99d7fd2b400649baf91af0fe3e64f2ae3bf34b/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a6d2a2147c23d6db38977d2b195118845bcf0f4b7b50bd65e59156087c8f4a36/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ec5acf265466354c265f4a5a6c47300c16e052d876e5b879f13c8cb25513d1df/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd69455479fe49678a69e6c15e7428cf2e0933a67e62ce21b42adc2ddffbbc50/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4da814f488dc35aa80427876bce77b335fc3a2333320170df1e542d7dbf76b68/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:06:40 ha-198834 dockerd[794]: time="2025-09-17T00:06:40.918358490Z" level=info msg="ignoring event" container=c593c83411d202af565aa578ee9c507fe6076579aab28504b4f9fc77eebb5e49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/13e9e86bdcc31c2473895f9f8e326522c316dee735315cefa4058543e1714435/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:52 ha-198834 dockerd[794]: time="2025-09-17T00:06:52.494377563Z" level=info msg="ignoring event" container=1625a23fd7f91dfa311956f9315bcae7fdde0540127a12f56cf5429b147e1f07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	64ab62b23e778       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       2                   a6d2a2147c23d       storage-provisioner
	ab70c5e50e54c       765655ea60781                                                                                         7 minutes ago       Running             kube-vip                  1                   2edf887287bbf       kube-vip-ha-198834
	bdc52003487f9       409467f978b4a                                                                                         7 minutes ago       Running             kindnet-cni               1                   13e9e86bdcc31       kindnet-h28vp
	c593c83411d20       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       1                   a6d2a2147c23d       storage-provisioner
	d130ec085d5ce       8c811b4aec35f                                                                                         7 minutes ago       Running             busybox                   1                   4da814f488dc3       busybox-7b57f96db7-pstjp
	19c8584dae1b9       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   3                   fd69455479fe4       coredns-66bc5c9577-5wx4k
	21dff06737d90       df0860106674d                                                                                         7 minutes ago       Running             kube-proxy                1                   02337f9cf4b12       kube-proxy-5tkhn
	8a501078c4170       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   3                   ec5acf2654663       coredns-66bc5c9577-mjbz6
	9f5475377594b       90550c43ad2bc                                                                                         7 minutes ago       Running             kube-apiserver            1                   b47695e7722ae       kube-apiserver-ha-198834
	1625a23fd7f91       765655ea60781                                                                                         7 minutes ago       Exited              kube-vip                  0                   2edf887287bbf       kube-vip-ha-198834
	e5f91b76238c9       a0af72f2ec6d6                                                                                         7 minutes ago       Running             kube-controller-manager   1                   af005efeb3a09       kube-controller-manager-ha-198834
	371ff065d1dfd       46169d968e920                                                                                         7 minutes ago       Running             kube-scheduler            1                   7a13ea6d24610       kube-scheduler-ha-198834
	7b047b1099553       5f1f5298c888d                                                                                         7 minutes ago       Running             etcd                      1                   beb17aaed35c3       etcd-ha-198834
	43ce744921507       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Exited              busybox                   0                   2ab50e090d466       busybox-7b57f96db7-pstjp
	f4f7ea59034e3       52546a367cc9e                                                                                         16 minutes ago      Exited              coredns                   2                   7b09bb6b1db0b       coredns-66bc5c9577-mjbz6
	9a9eb43950f05       52546a367cc9e                                                                                         16 minutes ago      Exited              coredns                   2                   bfd4ac8a61c79       coredns-66bc5c9577-5wx4k
	470c5aeb0143c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              16 minutes ago      Exited              kindnet-cni               0                   f541f878be896       kindnet-h28vp
	2da683f529549       df0860106674d                                                                                         17 minutes ago      Exited              kube-proxy                0                   b04f554fbbf03       kube-proxy-5tkhn
	4f536df8f44eb       a0af72f2ec6d6                                                                                         17 minutes ago      Exited              kube-controller-manager   0                   3f97e150fa11b       kube-controller-manager-ha-198834
	ea129c2b5408a       90550c43ad2bc                                                                                         17 minutes ago      Exited              kube-apiserver            0                   364803df34eb0       kube-apiserver-ha-198834
	69601afa8d5b0       5f1f5298c888d                                                                                         17 minutes ago      Exited              etcd                      0                   d6bbb58cc14ca       etcd-ha-198834
	82a99d0c7744a       46169d968e920                                                                                         17 minutes ago      Exited              kube-scheduler            0                   7ffde546949d7       kube-scheduler-ha-198834
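
The container status table above is collected from the primary node's container runtime; assuming crictl is what produced it, the same listing can be pulled directly with:

  out/minikube-linux-amd64 -p ha-198834 ssh "sudo crictl ps -a"
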
	
	
	==> coredns [19c8584dae1b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53538 - 29295 "HINFO IN 9023489977302481875.6206531949632663336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037239604s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
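The repeated dial timeouts to 10.96.0.1:443 mean this CoreDNS replica could not reach the in-cluster kubernetes Service while the control plane was restarting. A quick, hedged way to check that Service and its endpoints afterwards (assuming the kubeconfig context created for the profile is named ha-198834):

  # 10.96.0.1 is the ClusterIP of the default/kubernetes Service
  kubectl --context ha-198834 get svc kubernetes -o wide
  # the endpoints should list the apiserver addresses CoreDNS needs to reach
  kubectl --context ha-198834 get endpoints kubernetes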
	
	
	==> coredns [8a501078c417] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35492 - 21170 "HINFO IN 5429275037699935078.1019057475364754304. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034969536s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [9a9eb43950f0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48462 - 46874 "HINFO IN 5273252588524494281.7165436024008789767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039842483s
	[INFO] 10.244.1.2:57104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003506199s
	[INFO] 10.244.1.2:35085 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.022178595s
	[INFO] 10.244.0.4:51301 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001619426s
	[INFO] 10.244.1.2:53849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259853s
	[INFO] 10.244.1.2:45188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162256s
	[INFO] 10.244.1.2:47534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012152721s
	[INFO] 10.244.1.2:52406 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144842s
	[INFO] 10.244.0.4:34463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015354953s
	[INFO] 10.244.0.4:44729 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186473s
	[INFO] 10.244.0.4:49846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119774s
	[INFO] 10.244.0.4:48015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170848s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022712s
	[INFO] 10.244.1.2:41177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176943s
	[INFO] 10.244.1.2:35431 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141918s
	[INFO] 10.244.0.4:42357 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222475s
	[INFO] 10.244.0.4:38639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076388s
	[INFO] 10.244.1.2:48245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137114s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
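Before the SIGTERM, this replica was serving normal lookups from the two busybox pods (10.244.1.2 and 10.244.0.4). The same kind of queries can be reproduced by hand; the pod name below is taken from the container listing earlier in this log, and the sketch assumes the deployment still has a running replacement pod:

  # resolve the in-cluster API Service and the host alias the DNS test exercises
  kubectl --context ha-198834 exec busybox-7b57f96db7-pstjp -- nslookup kubernetes.default
  kubectl --context ha-198834 exec busybox-7b57f96db7-pstjp -- nslookup host.minikube.internal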
	
	
	==> coredns [f4f7ea59034e] <==
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55965 - 29875 "HINFO IN 6625775143588404920.46653605595863863. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.021009667s
	[INFO] 10.244.1.2:48391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347519s
	[INFO] 10.244.1.2:52968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.006268731s
	[INFO] 10.244.1.2:37064 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.01625905s
	[INFO] 10.244.0.4:38724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015051s
	[INFO] 10.244.0.4:54867 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000812822s
	[INFO] 10.244.0.4:36556 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000823234s
	[INFO] 10.244.0.4:49673 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000092222s
	[INFO] 10.244.1.2:54588 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010881123s
	[INFO] 10.244.1.2:37311 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184317s
	[INFO] 10.244.1.2:34776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214951s
	[INFO] 10.244.1.2:60592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142928s
	[INFO] 10.244.0.4:49014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014497s
	[INFO] 10.244.0.4:49266 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004264282s
	[INFO] 10.244.0.4:42048 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128777s
	[INFO] 10.244.0.4:37542 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071541s
	[INFO] 10.244.1.2:43417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165726s
	[INFO] 10.244.0.4:54211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155228s
	[INFO] 10.244.0.4:54131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161968s
	[INFO] 10.244.1.2:34766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190106s
	[INFO] 10.244.1.2:35363 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179724s
	[INFO] 10.244.1.2:39508 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110509s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:14:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f3c2828aef94f11bd80d984a3eb304b
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m40s                  kube-proxy       
	  Normal  Starting                 17m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    17m                    kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                    kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  Starting                 17m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                    kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           17m                    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           8m53s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  NodeHasSufficientMemory  7m53s (x8 over 7m53s)  kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    7m53s (x8 over 7m53s)  kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m53s (x7 over 7m53s)  kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m44s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           7m6s                   node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           6m53s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           5m59s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:14:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:08:25 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:08:25 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:08:25 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:08:25 +0000   Wed, 17 Sep 2025 00:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d9336414c044e558d42395caacb8496
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 16m                    kube-proxy       
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m53s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  Starting                 7m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m51s (x8 over 7m51s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m51s (x8 over 7m51s)  kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m51s (x7 over 7m51s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m44s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           7m6s                   node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           6m53s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           5m59s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
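Only ha-198834 and ha-198834-m02 are described above; the m03 member had already been deleted by the test, which is also why the pod-garbage-collector errors for it appear further down. To regenerate this node view against the same cluster (illustrative, same context assumption as above):

  kubectl --context ha-198834 get nodes -o wide
  kubectl --context ha-198834 describe nodes ha-198834 ha-198834-m02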
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
	
	
	==> etcd [69601afa8d5b] <==
	{"level":"warn","ts":"2025-09-17T00:06:21.816182Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:06:21.816245Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:06:21.816263Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:06:21.816182Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:06:21.816283Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:06:21.816292Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:06:21.816232Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-17T00:06:21.816324Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816342Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816364Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816409Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816435Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816472Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816689Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816711Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.816726Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.816752Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.816950Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.817063Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.817099Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.817120Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.819127Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-17T00:06:21.819183Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:06:21.819210Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-17T00:06:21.819240Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-198834","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [7b047b109955] <==
	{"level":"info","ts":"2025-09-17T00:07:58.906971Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:58.923403Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:58.926618Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.247606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:14:20.255084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:14:20.263476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:14:20.273255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39384","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:14:20.282657Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(5981864578030751937 12593026477526642892)"}
	{"level":"info","ts":"2025-09-17T00:14:20.284010Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"b3d041dbb5a11c89","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-17T00:14:20.284049Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284086Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284124Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284164Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284185Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284211Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284417Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"context canceled"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284501Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b3d041dbb5a11c89","error":"failed to read b3d041dbb5a11c89 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-17T00:14:20.284527Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284665Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:14:20.284696Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284724Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284740Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284770Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.294982Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.296687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:38978","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:14:26 up  2:56,  0 users,  load average: 1.01, 1.49, 1.55
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [470c5aeb0143] <==
	I0917 00:05:30.418641       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:40.418896       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:40.419001       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:05:40.419203       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:40.419213       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:40.419325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:40.419337       1 main.go:301] handling current node
	I0917 00:05:50.419127       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:50.419157       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:50.419382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:50.419397       1 main.go:301] handling current node
	I0917 00:05:50.419409       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:50.419413       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:06:00.422562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:00.422596       1 main.go:301] handling current node
	I0917 00:06:00.422611       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:06:00.422616       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:06:00.422807       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:06:00.422815       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:06:10.425320       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:10.425358       1 main.go:301] handling current node
	I0917 00:06:10.425375       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:06:10.425381       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:06:10.425598       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:06:10.425613       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [bdc52003487f] <==
	I0917 00:13:41.571105       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:13:41.571718       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:41.571924       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:51.563112       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:13:51.563147       1 main.go:301] handling current node
	I0917 00:13:51.563166       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:13:51.563171       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:13:51.563440       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:51.563450       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:01.562311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:01.562353       1 main.go:301] handling current node
	I0917 00:14:01.562369       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:01.562373       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:14:01.562589       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:14:01.562603       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:11.571668       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:11.571702       1 main.go:301] handling current node
	I0917 00:14:11.571718       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:11.571723       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:14:11.571936       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:14:11.571959       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:21.562648       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:21.562681       1 main.go:301] handling current node
	I0917 00:14:21.562695       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:21.562699       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
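Both kindnet instances simply walk the node list every ten seconds and handle each node's PodCIDR; in the newer instance the final pass at 00:14:21 only covers the two remaining nodes, consistent with the m03 removal. The per-node CIDRs it is acting on can be read directly from the Node objects (sketch, same context assumption as above):

  kubectl --context ha-198834 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR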
	
	
	==> kube-apiserver [9f5475377594] <==
	E0917 00:07:13.174810       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174819       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174828       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174837       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174846       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.175006       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.175025       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-09-17T00:07:13.177063Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0007dd680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-09-17T00:07:13.177068Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00115f680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	E0917 00:07:13.177483       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.177936       1 watcher.go:335] watch chan error: etcdserver: no leader
	I0917 00:07:14.364054       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	W0917 00:07:43.229272       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0917 00:07:46.104655       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:03.309841       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:14.885894       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:27.376078       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:26.628008       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:30.857365       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:11:39.501415       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:11:56.232261       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:12:57.532285       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:02.515292       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:58.658174       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:20.016953       1 stats.go:136] "Error getting keys" err="empty key: \"\""
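The watch-channel errors at 00:07:13 coincide with the etcd leader change logged just below them; once a leader was re-elected this apiserver kept serving, and only the periodic "Error getting keys" messages remain. When digging into a blip like this, the apiserver's own health endpoints break its status down per check (illustrative, same context assumption as above):

  kubectl --context ha-198834 get --raw '/readyz?verbose'
  kubectl --context ha-198834 get --raw '/livez?verbose'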
	
	
	==> kube-apiserver [ea129c2b5408] <==
	I0917 00:06:21.818285       1 secure_serving.go:259] Stopped listening on [::]:8443
	I0917 00:06:21.818307       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:06:21.818343       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:06:21.818212       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 00:06:21.818343       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:06:21.818354       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0917 00:06:21.818404       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	E0917 00:06:21.818445       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0917 00:06:21.819652       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.966573ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-hxygcsz4tng6hmluvaoa4vlmha" result=null
	W0917 00:06:22.805712       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:06:22.862569       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-09-17T00:06:22.868061Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00125a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	I0917 00:06:22.868163       1 stats.go:136] "Error getting keys" err="context canceled"
	{"level":"warn","ts":"2025-09-17T00:06:22.868276Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00125a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.869370Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0017f8960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	I0917 00:06:22.869404       1 cidrallocator.go:210] stopping ServiceCIDR Allocator Controller
	{"level":"warn","ts":"2025-09-17T00:06:22.869490Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00125b680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.869797Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0018fc5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.870313Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ce1860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	I0917 00:06:22.870382       1 stats.go:136] "Error getting keys" err="context canceled"
	{"level":"warn","ts":"2025-09-17T00:06:22.870475Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ce1860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.871365Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ed2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	I0917 00:06:22.871420       1 stats.go:136] "Error getting keys" err="context canceled"
	{"level":"warn","ts":"2025-09-17T00:06:22.871506Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ed2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.875069Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002a014a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	
	
	==> kube-controller-manager [4f536df8f44e] <==
	I0916 23:57:23.340737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0916 23:57:23.340876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:57:23.341125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:57:23.341625       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0916 23:57:23.341694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0916 23:57:23.342559       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0916 23:57:23.344828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:57:23.344975       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:57:23.345054       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:57:23.345095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:57:23.345107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:57:23.345114       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:57:23.346125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:23.351186       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834" podCIDRs=["10.244.0.0/24"]
	I0916 23:57:23.356557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:23.360087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:53.917484       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m02\" does not exist"
	I0916 23:57:53.927329       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:58.295579       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	E0916 23:58:24.690329       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 23:58:24.703047       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:58:25.387067       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m03\" does not exist"
	I0916 23:58:25.397154       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m03" podCIDRs=["10.244.2.0/24"]
	I0916 23:58:28.308323       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	E0917 00:01:35.727697       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [e5f91b76238c] <==
	I0917 00:06:42.688133       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:06:42.688192       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:06:42.688272       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:06:42.688535       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:06:42.688667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:06:42.689165       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	I0917 00:06:42.689227       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834"
	I0917 00:06:42.689234       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:06:42.689307       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	I0917 00:06:42.689381       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:06:42.689800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:06:42.690667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:06:42.694964       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:06:42.699163       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:06:42.700692       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:06:42.713986       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:06:42.717269       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:06:42.722438       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:06:42.724798       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:06:42.752877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:14:22.716991       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717039       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717048       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717085       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717092       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	
	
	==> kube-proxy [21dff06737d9] <==
	I0917 00:06:40.905839       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:06:40.968196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0917 00:06:44.060317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-198834&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0917 00:06:45.568444       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:06:45.568482       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:06:45.568583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:06:45.590735       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:06:45.590782       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:06:45.596121       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:06:45.596463       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:06:45.596508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:06:45.597774       1 config.go:200] "Starting service config controller"
	I0917 00:06:45.597791       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:06:45.597883       1 config.go:309] "Starting node config controller"
	I0917 00:06:45.597987       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:06:45.598035       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:06:45.598042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:06:45.598039       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:06:45.598057       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:06:45.698355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:06:45.698442       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:06:45.698447       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:06:45.698470       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [2da683f52954] <==
	I0916 23:57:24.932824       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:57:25.001436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:57:25.102414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:57:25.102449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:57:25.102563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:57:25.131540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:57:25.131604       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:57:25.138482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:57:25.139006       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:57:25.139079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:57:25.143232       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:57:25.143254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:57:25.143282       1 config.go:200] "Starting service config controller"
	I0916 23:57:25.143288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:57:25.143298       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:57:25.143304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:57:25.144514       1 config.go:309] "Starting node config controller"
	I0916 23:57:25.144540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:57:25.144548       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:57:25.243772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [371ff065d1df] <==
	I0917 00:06:34.304210       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:06:39.358570       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 00:06:39.358610       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 00:06:39.358624       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:06:39.358634       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:06:39.390353       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:06:39.390375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:06:39.392538       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:39.392576       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:39.392924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:06:39.392961       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:06:39.493239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [82a99d0c7744] <==
	E0916 23:58:30.048751       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	I0916 23:58:30.051563       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:32.033373       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:32.033442       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b547a423-84fb-45ae-be85-ebd5ae31cede(kube-system/kindnet-wklkh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	E0916 23:58:32.033468       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	I0916 23:58:32.034562       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:34.059741       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	E0916 23:58:34.059840       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ceef8152-3e11-4bf0-99dc-43470c027544(kube-system/kindnet-cdptd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.059869       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.060293       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	E0916 23:58:34.060658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c8813f5f-dfaf-4be1-a0ba-e444bcb2e943(kube-system/kindnet-8t8pb) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	E0916 23:58:34.061375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	I0916 23:58:34.061557       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	I0916 23:58:34.062640       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	I0917 00:01:35.538693       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="ecad6988-1efb-4cc6-8920-902b41d3f3ed" pod="default/busybox-7b57f96db7-kg4q6" assumedNode="ha-198834-m02" currentNode="ha-198834-m03"
	E0917 00:01:35.544474       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m03"
	E0917 00:01:35.546366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ecad6988-1efb-4cc6-8920-902b41d3f3ed(default/busybox-7b57f96db7-kg4q6) was assumed on ha-198834-m03 but assigned to ha-198834-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	E0917 00:01:35.546583       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	I0917 00:01:35.548055       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m02"
	I0917 00:06:14.797858       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:06:14.797982       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:14.797862       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:06:14.798018       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:06:14.798047       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:06:14.798073       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 00:12:23 ha-198834 kubelet[1349]: E0917 00:12:23.472193    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:33 ha-198834 kubelet[1349]: E0917 00:12:33.476036    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:33 ha-198834 kubelet[1349]: E0917 00:12:33.476130    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:43 ha-198834 kubelet[1349]: E0917 00:12:43.482413    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:43 ha-198834 kubelet[1349]: E0917 00:12:43.482518    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:53 ha-198834 kubelet[1349]: E0917 00:12:53.487015    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:53 ha-198834 kubelet[1349]: E0917 00:12:53.487127    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:13:03 ha-198834 kubelet[1349]: E0917 00:13:03.492319    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:03 ha-198834 kubelet[1349]: E0917 00:13:03.492420    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:13 ha-198834 kubelet[1349]: E0917 00:13:13.496175    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:13 ha-198834 kubelet[1349]: E0917 00:13:13.496282    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:23 ha-198834 kubelet[1349]: E0917 00:13:23.501136    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:23 ha-198834 kubelet[1349]: E0917 00:13:23.501231    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:33 ha-198834 kubelet[1349]: E0917 00:13:33.507713    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:33 ha-198834 kubelet[1349]: E0917 00:13:33.507829    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:43 ha-198834 kubelet[1349]: E0917 00:13:43.509754    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:43 ha-198834 kubelet[1349]: E0917 00:13:43.509855    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:53 ha-198834 kubelet[1349]: E0917 00:13:53.513005    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:53 ha-198834 kubelet[1349]: E0917 00:13:53.513112    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:14:03 ha-198834 kubelet[1349]: E0917 00:14:03.518517    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:14:03 ha-198834 kubelet[1349]: E0917 00:14:03.518636    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315961 maxSize=10485760
	Sep 17 00:14:13 ha-198834 kubelet[1349]: E0917 00:14:13.521966    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:14:13 ha-198834 kubelet[1349]: E0917 00:14:13.522077    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315961 maxSize=10485760
	Sep 17 00:14:23 ha-198834 kubelet[1349]: E0917 00:14:23.527352    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:14:23 ha-198834 kubelet[1349]: E0917 00:14:23.527458    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42316126 maxSize=10485760
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-xfzdd
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-198834 describe pod busybox-7b57f96db7-xfzdd
helpers_test.go:290: (dbg) kubectl --context ha-198834 describe pod busybox-7b57f96db7-xfzdd:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-xfzdd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z55j5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-z55j5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  10s               default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10s               default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8s (x2 over 10s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (11.08s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-198834" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-198834\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-198834\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares
\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.0\",\"ClusterName\":\"ha-198834\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"I
P\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"re
gistry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetP
ath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 767393,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:06:25.645261111Z",
	            "FinishedAt": "2025-09-17T00:06:25.028586858Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4c4867649ec3bf0587f9374f9f6dd9a46e1de12efb67420295d89335c703f889",
	            "SandboxKey": "/var/run/docker/netns/4c4867649ec3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:18:73:7c:dc:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "72aabe87a74799f11ad2c9fa1888331ed148259ce868576244b9fb8348ce4fc4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.233664087s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt                                                           │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ node    │ ha-198834 node stop m02 --alsologtostderr -v 5                                                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ node    │ ha-198834 node start m02 --alsologtostderr -v 5                                                                                    │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:05 UTC │
	│ node    │ ha-198834 node list --alsologtostderr -v 5                                                                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:05 UTC │                     │
	│ stop    │ ha-198834 stop --alsologtostderr -v 5                                                                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:05 UTC │ 17 Sep 25 00:06 UTC │
	│ start   │ ha-198834 start --wait true --alsologtostderr -v 5                                                                                 │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:06 UTC │                     │
	│ node    │ ha-198834 node list --alsologtostderr -v 5                                                                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:14 UTC │                     │
	│ node    │ ha-198834 node delete m03 --alsologtostderr -v 5                                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:14 UTC │ 17 Sep 25 00:14 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:06:25
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:06:25.424279  767194 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:06:25.424573  767194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:06:25.424581  767194 out.go:374] Setting ErrFile to fd 2...
	I0917 00:06:25.424586  767194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:06:25.424775  767194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:06:25.425286  767194 out.go:368] Setting JSON to false
	I0917 00:06:25.426324  767194 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10118,"bootTime":1758057468,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:06:25.426427  767194 start.go:140] virtualization: kvm guest
	I0917 00:06:25.428578  767194 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:06:25.430211  767194 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:06:25.430246  767194 notify.go:220] Checking for updates...
	I0917 00:06:25.432570  767194 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:06:25.433820  767194 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:25.435087  767194 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:06:25.436546  767194 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:06:25.437859  767194 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:06:25.439704  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:25.439894  767194 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:06:25.464302  767194 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:06:25.464438  767194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:06:25.516697  767194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:06:25.50681521 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[N
ame:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:06:25.516812  767194 docker.go:318] overlay module found
	I0917 00:06:25.518746  767194 out.go:179] * Using the docker driver based on existing profile
	I0917 00:06:25.519979  767194 start.go:304] selected driver: docker
	I0917 00:06:25.519997  767194 start.go:918] validating driver "docker" against &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:25.520122  767194 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:06:25.520208  767194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:06:25.572516  767194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:06:25.563271649 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:06:25.573652  767194 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:06:25.573697  767194 cni.go:84] Creating CNI manager for ""
	I0917 00:06:25.573785  767194 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:06:25.573870  767194 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:f
alse kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:25.576437  767194 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0917 00:06:25.577616  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:06:25.578818  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:06:25.579785  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:25.579821  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:06:25.579826  767194 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:06:25.579871  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:06:25.579979  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:06:25.579993  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:06:25.580143  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:25.599791  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:06:25.599812  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:06:25.599832  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:06:25.599862  767194 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:06:25.599948  767194 start.go:364] duration metric: took 62.805µs to acquireMachinesLock for "ha-198834"
	I0917 00:06:25.599973  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:06:25.599982  767194 fix.go:54] fixHost starting: 
	I0917 00:06:25.600220  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:06:25.616766  767194 fix.go:112] recreateIfNeeded on ha-198834: state=Stopped err=<nil>
	W0917 00:06:25.616794  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:06:25.618968  767194 out.go:252] * Restarting existing docker container for "ha-198834" ...
	I0917 00:06:25.619043  767194 cli_runner.go:164] Run: docker start ha-198834
	I0917 00:06:25.855847  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:06:25.873957  767194 kic.go:430] container "ha-198834" state is running.
	I0917 00:06:25.874450  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:25.892189  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:25.892415  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:06:25.892480  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:25.912009  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:25.912263  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:25.912277  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:06:25.912887  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59988->127.0.0.1:32813: read: connection reset by peer
	I0917 00:06:29.050047  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:06:29.050078  767194 ubuntu.go:182] provisioning hostname "ha-198834"
	I0917 00:06:29.050148  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.067712  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.067965  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.067980  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0917 00:06:29.215970  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:06:29.216043  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.234106  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.234329  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.234345  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:06:29.370392  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
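	
	Note: the SSH script above pins the machine hostname by rewriting (or appending) the 127.0.1.1 entry in /etc/hosts. A minimal sketch for confirming the result on the node, using the profile name and SSH passthrough shown elsewhere in this report; the command is illustrative and not part of the captured output:
	  out/minikube-linux-amd64 -p ha-198834 ssh -- "hostname; grep '^127.0.1.1' /etc/hosts"
	  # expected: "ha-198834" followed by a line such as "127.0.1.1 ha-198834"
	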
	I0917 00:06:29.370431  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:06:29.370460  767194 ubuntu.go:190] setting up certificates
	I0917 00:06:29.370469  767194 provision.go:84] configureAuth start
	I0917 00:06:29.370526  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:29.387543  767194 provision.go:143] copyHostCerts
	I0917 00:06:29.387579  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:29.387610  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:06:29.387629  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:29.387709  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:06:29.387817  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:29.387848  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:06:29.387857  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:29.387927  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:06:29.388004  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:29.388027  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:06:29.388036  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:29.388076  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:06:29.388269  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0917 00:06:29.680052  767194 provision.go:177] copyRemoteCerts
	I0917 00:06:29.680112  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:06:29.680162  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.697396  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:29.794745  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:06:29.794807  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:06:29.818846  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:06:29.818935  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:06:29.843109  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:06:29.843177  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:06:29.867681  767194 provision.go:87] duration metric: took 497.192274ms to configureAuth
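	
	Note: configureAuth copies the CA plus a freshly generated server certificate/key pair into /etc/docker on the node; the dockerd flags written into docker.service just below (--tlsverify, --tlscacert, --tlscert, --tlskey) point at exactly these paths. A hedged sketch for checking that the server certificate chains to the copied CA (assumes openssl is available in the node image; not taken from this log):
	  out/minikube-linux-amd64 -p ha-198834 ssh -- "sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem"
	  # expected: /etc/docker/server.pem: OK
	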
	I0917 00:06:29.867713  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:06:29.867938  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:29.867986  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:29.885190  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:29.885426  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:29.885443  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:06:30.020557  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:06:30.020583  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:06:30.020695  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:06:30.020755  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.038274  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:30.038492  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:30.038556  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:06:30.187120  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:06:30.187195  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.205293  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:30.205508  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0917 00:06:30.205531  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:06:30.346335  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:30.346367  767194 machine.go:96] duration metric: took 4.453936173s to provisionDockerMachine
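	
	Note: the unit file generated above deliberately contains an empty "ExecStart=" line before the real one, the standard systemd idiom for discarding an inherited ExecStart, and the follow-up command only swaps docker.service.new into place (and restarts Docker) when diff -u reports a difference. A small, illustrative check that the rewrite took effect on the node:
	  out/minikube-linux-amd64 -p ha-198834 ssh -- "sudo systemctl cat docker.service | grep -c '^ExecStart='; systemctl is-active docker"
	  # expected: 2 (the empty reset line plus the dockerd command line) and "active"
	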
	I0917 00:06:30.346383  767194 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0917 00:06:30.346398  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:06:30.346454  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:06:30.346492  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.363443  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.460028  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:06:30.463596  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:06:30.463625  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:06:30.463633  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:06:30.463639  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:06:30.463650  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:06:30.463700  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:06:30.463783  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:06:30.463796  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:06:30.463882  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:06:30.472864  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:30.497731  767194 start.go:296] duration metric: took 151.329262ms for postStartSetup
	I0917 00:06:30.497818  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:06:30.497853  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.515030  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.607057  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:06:30.611598  767194 fix.go:56] duration metric: took 5.011609188s for fixHost
	I0917 00:06:30.611632  767194 start.go:83] releasing machines lock for "ha-198834", held for 5.011665153s
	I0917 00:06:30.611691  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:06:30.629667  767194 ssh_runner.go:195] Run: cat /version.json
	I0917 00:06:30.629691  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:06:30.629719  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.629746  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:06:30.648073  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.648707  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:06:30.812105  767194 ssh_runner.go:195] Run: systemctl --version
	I0917 00:06:30.816966  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:06:30.821509  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:06:30.840562  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:06:30.840635  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:06:30.850098  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:06:30.850133  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:30.850174  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:30.850289  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:30.867420  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:06:30.877948  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:06:30.888651  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:06:30.888731  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:06:30.899002  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:30.909052  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:06:30.918885  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:30.928779  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:06:30.938579  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:06:30.949499  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:06:30.960372  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:06:30.971253  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:06:30.980460  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:06:30.989781  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:31.059433  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
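	
	Note: the sed series above edits /etc/containerd/config.toml in place: it pins sandbox_image to registry.k8s.io/pause:3.10.1, switches the runtime handlers to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and sets SystemdCgroup = true to match the "systemd" cgroup driver detected on the host. A quick, illustrative way to confirm the edits landed (not part of the captured output):
	  out/minikube-linux-amd64 -p ha-198834 ssh -- "grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml"
	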
	I0917 00:06:31.134046  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:31.134104  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:31.134189  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:06:31.147025  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:31.158451  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:06:31.177473  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:31.189232  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:06:31.201624  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:31.218917  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:06:31.222505  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:06:31.231136  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:06:31.249756  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:06:31.318828  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:06:31.386194  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:06:31.386293  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:06:31.405146  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:06:31.416620  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:31.483436  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:06:32.289053  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
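	
	Note: docker.go:575 above writes a small /etc/docker/daemon.json so that dockerd itself also reports "systemd" as its cgroup driver, keeping Docker, cri-dockerd and the kubelet consistent. A hedged check, mirroring the docker info query minikube runs a few steps later:
	  out/minikube-linux-amd64 -p ha-198834 ssh -- "cat /etc/docker/daemon.json; docker info --format '{{.CgroupDriver}}'"
	  # expected: a daemon.json carrying the cgroup-driver setting, followed by "systemd"
	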
	I0917 00:06:32.300858  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:06:32.312042  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:06:32.323965  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:06:32.335721  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:06:32.399500  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:06:32.463504  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:32.532114  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:06:32.554184  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:06:32.565656  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:32.632393  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:06:32.706727  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:06:32.718700  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:06:32.718779  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:06:32.722502  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:06:32.722558  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:06:32.725864  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:06:32.759463  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
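	
	Note: the crictl version output above works because /etc/crictl.yaml (written at 00:06:31.201624 above) points the CRI tooling at cri-dockerd's socket rather than containerd's. An equivalent explicit invocation, shown purely as an illustration:
	  out/minikube-linux-amd64 -p ha-198834 ssh -- "sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version"
	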
	I0917 00:06:32.759531  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:06:32.784419  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:06:32.811577  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:06:32.811654  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:06:32.828274  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:06:32.832384  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:06:32.844198  767194 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:06:32.844338  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:32.844391  767194 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:06:32.866962  767194 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:06:32.866988  767194 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:06:32.867045  767194 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:06:32.888238  767194 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:06:32.888260  767194 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:06:32.888271  767194 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0917 00:06:32.888408  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:06:32.888467  767194 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:06:32.937957  767194 cni.go:84] Creating CNI manager for ""
	I0917 00:06:32.937987  767194 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:06:32.937999  767194 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:06:32.938023  767194 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:06:32.938138  767194 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
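	
	Note: everything from "apiVersion: kubeadm.k8s.io/v1beta4" down to this point is the kubeadm config that gets written to /var/tmp/minikube/kubeadm.yaml.new a few steps later; the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents are separated by "---". A hedged sketch for sanity-checking such a file on the node, using the kubeadm binary path found later in this log ("kubeadm config validate" exists in recent kubeadm releases; this invocation is illustrative):
	  out/minikube-linux-amd64 -p ha-198834 ssh -- "sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"
	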
	
	I0917 00:06:32.938157  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:06:32.938196  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:06:32.951493  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
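	
	Note: kube-vip's IPVS-based load balancing is skipped because "lsmod | grep ip_vs" finds no ip_vs module on the node; with the Docker driver the node shares the host kernel, so the module would have to be loaded on the host itself. A hedged sketch, to be run on the host rather than inside the container:
	  lsmod | grep '^ip_vs' || sudo modprobe ip_vs   # load the module if the host kernel ships it
	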
	I0917 00:06:32.951590  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
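	
	Note: the manifest above is the kube-vip static pod that is copied to /etc/kubernetes/manifests/kube-vip.yaml further down; once a control-plane node wins the plndr-cp-lock lease it binds the HA VIP 192.168.49.254 on eth0. Two illustrative checks, only meaningful once the API server is reachable again:
	  out/minikube-linux-amd64 -p ha-198834 ssh -- "ip addr show dev eth0 | grep 192.168.49.254"
	  out/minikube-linux-amd64 -p ha-198834 kubectl -- -n kube-system get lease plndr-cp-lock
	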
	I0917 00:06:32.951639  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:06:32.960559  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:06:32.960633  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:06:32.969398  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0917 00:06:32.986997  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:06:33.005302  767194 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0917 00:06:33.023722  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:06:33.042510  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:06:33.046353  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
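	
	Note: the two grep/rewrite one-liners (00:06:32.832384 for host.minikube.internal and this one for control-plane.minikube.internal) rebuild /etc/hosts through a temp file so the node can resolve the Docker network gateway and the HA API endpoint without external DNS. An illustrative check of the resulting entries:
	  out/minikube-linux-amd64 -p ha-198834 ssh -- "grep -E 'host.minikube.internal|control-plane.minikube.internal' /etc/hosts"
	  # expected: 192.168.49.1 host.minikube.internal and 192.168.49.254 control-plane.minikube.internal
	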
	I0917 00:06:33.057738  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:33.121569  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:06:33.146613  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0917 00:06:33.146635  767194 certs.go:194] generating shared ca certs ...
	I0917 00:06:33.146655  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.146819  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:06:33.146861  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:06:33.146872  767194 certs.go:256] generating profile certs ...
	I0917 00:06:33.147007  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:06:33.147039  767194 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731
	I0917 00:06:33.147053  767194 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:06:33.244684  767194 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 ...
	I0917 00:06:33.244725  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731: {Name:mkeb1335a8dc05724d212e3f3c2f54f358e1623c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.244951  767194 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731 ...
	I0917 00:06:33.244976  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731: {Name:mkb539de1a460dc24807c303f56b400b0045d38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.245116  767194 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.f8364731 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0917 00:06:33.245304  767194 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.f8364731 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0917 00:06:33.245488  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
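The apiserver certificate regenerated above has to carry every address a client might dial: the in-cluster service IP, localhost, all three control-plane node IPs, and the 192.168.49.254 VIP. One way to confirm the copy that lands on the node really contains those SANs, using the target path from the scp lines further down (an illustrative check, not run by the test):

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'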
	I0917 00:06:33.245509  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:06:33.245530  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:06:33.245548  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:06:33.245569  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:06:33.245589  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:06:33.245603  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:06:33.245616  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:06:33.245631  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:06:33.245698  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:06:33.245742  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:06:33.245759  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:06:33.245789  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:06:33.245819  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:06:33.245852  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:06:33.245931  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:33.245973  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.246001  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.246019  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.246713  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:06:33.280935  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:06:33.310873  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:06:33.335758  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:06:33.364379  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:06:33.390832  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:06:33.415955  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:06:33.440057  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:06:33.463203  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:06:33.486818  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:06:33.510617  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:06:33.534829  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:06:33.553186  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:06:33.558602  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:06:33.568556  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.572286  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.572354  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:06:33.579085  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:06:33.588476  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:06:33.598074  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.601602  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.601665  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:06:33.608370  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:06:33.617493  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:06:33.626827  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.630358  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.630412  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:06:33.637101  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
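The openssl x509 -hash runs above compute each certificate's subject hash, and the ln commands publish the cert under that hash in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, and so on), which is how OpenSSL-based clients locate a trusted issuer. The same step for a single certificate, sketched generically:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # subject hash, e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"              # hash-named symlink OpenSSL looks up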
	I0917 00:06:33.645992  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:06:33.649484  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:06:33.657172  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:06:33.664432  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:06:33.673579  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:06:33.681621  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:06:33.690060  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
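Each openssl x509 -checkend 86400 run above exits non-zero if the certificate would expire within the next 86400 seconds (24 hours), which is what would force a regeneration. The same check written as a loop over a few of the certs listed, with the paths used in the commands above (purely illustrative):

	for c in apiserver-kubelet-client.crt front-proxy-client.crt etcd/server.crt; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c" || echo "expiring within 24h: $c"
	done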
	I0917 00:06:33.697708  767194 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:06:33.697865  767194 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:06:33.723793  767194 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:06:33.738005  767194 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:06:33.738035  767194 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:06:33.738100  767194 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:06:33.751261  767194 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:06:33.751774  767194 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-198834" does not appear in /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:33.751968  767194 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-661878/kubeconfig needs updating (will repair): [kubeconfig missing "ha-198834" cluster setting kubeconfig missing "ha-198834" context setting]
	I0917 00:06:33.752337  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.752804  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:06:33.753302  767194 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:06:33.753319  767194 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:06:33.753323  767194 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:06:33.753327  767194 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:06:33.753332  767194 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:06:33.753384  767194 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:06:33.753793  767194 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:06:33.766494  767194 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:06:33.766524  767194 kubeadm.go:593] duration metric: took 28.480766ms to restartPrimaryControlPlane
	I0917 00:06:33.766536  767194 kubeadm.go:394] duration metric: took 68.837067ms to StartCluster
	I0917 00:06:33.766560  767194 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.766635  767194 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:06:33.767596  767194 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:06:33.767874  767194 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:06:33.767916  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:06:33.767929  767194 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:06:33.768219  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:33.771075  767194 out.go:179] * Enabled addons: 
	I0917 00:06:33.772321  767194 addons.go:514] duration metric: took 4.387344ms for enable addons: enabled=[]
	I0917 00:06:33.772363  767194 start.go:246] waiting for cluster config update ...
	I0917 00:06:33.772375  767194 start.go:255] writing updated cluster config ...
	I0917 00:06:33.774041  767194 out.go:203] 
	I0917 00:06:33.775488  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:33.775605  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:33.777754  767194 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0917 00:06:33.779232  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:06:33.780466  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:06:33.781663  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:06:33.781696  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:06:33.781785  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:06:33.781814  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:06:33.781827  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:06:33.782011  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:33.808184  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:06:33.808211  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:06:33.808230  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:06:33.808264  767194 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:06:33.808324  767194 start.go:364] duration metric: took 41.8µs to acquireMachinesLock for "ha-198834-m02"
	I0917 00:06:33.808349  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:06:33.808357  767194 fix.go:54] fixHost starting: m02
	I0917 00:06:33.808657  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:06:33.830576  767194 fix.go:112] recreateIfNeeded on ha-198834-m02: state=Stopped err=<nil>
	W0917 00:06:33.830617  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:06:33.832420  767194 out.go:252] * Restarting existing docker container for "ha-198834-m02" ...
	I0917 00:06:33.832507  767194 cli_runner.go:164] Run: docker start ha-198834-m02
	I0917 00:06:34.153635  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:06:34.174085  767194 kic.go:430] container "ha-198834-m02" state is running.
	I0917 00:06:34.174485  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:34.193433  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:06:34.193710  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:06:34.193778  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:34.214780  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:34.215097  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:34.215113  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:06:34.215694  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38290->127.0.0.1:32818: read: connection reset by peer
	I0917 00:06:37.354066  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:06:37.354095  767194 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0917 00:06:37.354152  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.371082  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:37.371306  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:37.371320  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0917 00:06:37.519883  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:06:37.519999  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.537320  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:37.537534  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:37.537550  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:06:37.672583  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:06:37.672613  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:06:37.672631  767194 ubuntu.go:190] setting up certificates
	I0917 00:06:37.672648  767194 provision.go:84] configureAuth start
	I0917 00:06:37.672696  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:37.689646  767194 provision.go:143] copyHostCerts
	I0917 00:06:37.689686  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:37.689726  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:06:37.689739  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:06:37.689816  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:06:37.689949  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:37.689980  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:06:37.689988  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:06:37.690037  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:06:37.690112  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:37.690144  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:06:37.690151  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:06:37.690194  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:06:37.690275  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0917 00:06:37.816978  767194 provision.go:177] copyRemoteCerts
	I0917 00:06:37.817061  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:06:37.817110  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:37.833876  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:37.931727  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:06:37.931807  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:06:37.957434  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:06:37.957498  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:06:37.982656  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:06:37.982715  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:06:38.008383  767194 provision.go:87] duration metric: took 335.719749ms to configureAuth
	I0917 00:06:38.008424  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:06:38.008674  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:06:38.008734  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.025557  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.025785  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.025797  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:06:38.163170  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:06:38.163196  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:06:38.163371  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:06:38.163449  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.185210  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.185534  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.185648  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:06:38.356034  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:06:38.356160  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:38.375350  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:06:38.375668  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0917 00:06:38.375699  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:06:50.199822  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-17 00:04:28.867992287 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-17 00:06:38.349897889 +0000
	@@ -11,6 +11,7 @@
	 Type=notify
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0917 00:06:50.199856  767194 machine.go:96] duration metric: took 16.006130584s to provisionDockerMachine
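The long pause between 00:06:38 and 00:06:50 is the compare-and-swap one-liner above doing real work: diff exits non-zero because the rendered unit now carries Environment=NO_PROXY=192.168.49.2, so the new file is moved into place and docker is reloaded, re-enabled and restarted; on an unchanged unit the whole right-hand side is skipped. The same idempotent pattern sketched for an arbitrary unit (render_unit and myservice are placeholders, not names from this run):

	render_unit > /tmp/myservice.service.new                                   # placeholder: produce the desired unit file
	if ! sudo diff -u /lib/systemd/system/myservice.service /tmp/myservice.service.new; then
	  sudo mv /tmp/myservice.service.new /lib/systemd/system/myservice.service
	  sudo systemctl daemon-reload && sudo systemctl restart myservice         # only restart when something changed
	fi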
	I0917 00:06:50.199874  767194 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0917 00:06:50.199898  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:06:50.199991  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:06:50.200037  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.231846  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.352925  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:06:50.364867  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:06:50.365109  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:06:50.365165  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:06:50.365182  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:06:50.365203  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:06:50.365613  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:06:50.365774  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:06:50.365791  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:06:50.366045  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:06:50.388970  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:06:50.439295  767194 start.go:296] duration metric: took 239.401963ms for postStartSetup
	I0917 00:06:50.439403  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:06:50.439460  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.471007  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.602680  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:06:50.622483  767194 fix.go:56] duration metric: took 16.814116597s for fixHost
	I0917 00:06:50.622519  767194 start.go:83] releasing machines lock for "ha-198834-m02", held for 16.814180436s
	I0917 00:06:50.622611  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:06:50.653586  767194 out.go:179] * Found network options:
	I0917 00:06:50.656159  767194 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:06:50.657611  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:06:50.657663  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:06:50.657748  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:06:50.657820  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.658056  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:06:50.658130  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:06:50.695981  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.696302  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:06:50.813556  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:06:50.945454  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:06:50.945549  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:06:50.963173  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:06:50.963207  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:50.963244  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:50.963393  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:51.026654  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:06:51.062543  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:06:51.084179  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:06:51.084245  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:06:51.116429  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:51.134652  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:06:51.149737  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:06:51.178368  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:06:51.192765  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:06:51.210476  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:06:51.239805  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:06:51.263323  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:06:51.278110  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:06:51.292395  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:51.494387  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
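The sed edits above converge /etc/containerd/config.toml on the systemd cgroup driver, the runc v2 runtime, the registry.k8s.io/pause:3.10.1 sandbox image, and unprivileged ports before containerd is reloaded and restarted. A quick way to confirm the rewrite landed, assuming the same file path (illustrative check, not run by the test):

	sudo grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml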
	I0917 00:06:51.834314  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:06:51.834371  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:06:51.834425  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:06:51.865409  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:51.888868  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:06:51.925439  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:06:51.950993  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:06:51.977155  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:06:52.018179  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:06:52.023424  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:06:52.036424  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:06:52.064244  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:06:52.246651  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:06:52.441476  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:06:52.441527  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:06:52.483989  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:06:52.501544  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:06:52.690125  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:07:09.204303  767194 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.51413344s)
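The 129-byte /etc/docker/daemon.json written at 00:06:52 is what switches dockerd itself to the systemd cgroup driver, and the 16.5s restart above is that change being applied. A one-line check that the daemon picked it up (illustrative):

	docker info --format '{{.CgroupDriver}}'    # expected to print "systemd" after the restart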
	I0917 00:07:09.204382  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:07:09.225679  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:07:09.253125  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:07:09.286728  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:09.309012  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:07:09.445797  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:07:09.588443  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:09.726437  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:07:09.759063  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:07:09.787528  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:09.918052  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:07:10.070248  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:10.091720  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:07:10.091835  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:07:10.104106  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:07:10.104210  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:07:10.109447  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:07:10.164469  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:07:10.164546  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:10.206116  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:10.251181  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:07:10.252538  767194 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:07:10.254028  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:07:10.280282  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:07:10.286408  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:10.315665  767194 mustload.go:65] Loading cluster: ha-198834
	I0917 00:07:10.317007  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:10.317340  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:07:10.349566  767194 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:07:10.349878  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0917 00:07:10.349892  767194 certs.go:194] generating shared ca certs ...
	I0917 00:07:10.349931  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:07:10.350083  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:07:10.350139  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:07:10.350152  767194 certs.go:256] generating profile certs ...
	I0917 00:07:10.350273  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:07:10.350356  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.11b60fbb
	I0917 00:07:10.350412  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:07:10.350424  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:07:10.350443  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:07:10.350459  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:07:10.350474  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:07:10.350489  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:07:10.350505  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:07:10.350519  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:07:10.350532  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:07:10.350613  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:07:10.350656  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:07:10.350669  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:07:10.350702  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:07:10.350734  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:07:10.350774  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:07:10.350834  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:10.350874  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:10.350896  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:07:10.350924  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:07:10.350992  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:07:10.376726  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:07:10.493359  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:07:10.503886  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:07:10.534629  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:07:10.546504  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:07:10.568315  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:07:10.575486  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:07:10.605107  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:07:10.617021  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:07:10.651278  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:07:10.670568  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:07:10.696371  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:07:10.704200  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:07:10.732773  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:07:10.783862  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:07:10.831455  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:07:10.878503  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:07:10.928036  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:07:10.987893  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:07:11.056094  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:07:11.123465  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:07:11.173229  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:07:11.218880  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:07:11.260399  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:07:11.310489  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:07:11.343030  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:07:11.378463  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:07:11.409826  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:07:11.456579  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:07:11.506523  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:07:11.540827  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
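The stat / "scp memory" sequence above is minikube's ssh_runner syncing the shared control-plane secrets (sa.pub, sa.key, the front-proxy and etcd CAs, and the kubeconfig) over the forwarded SSH port: each file's size is probed with stat, its bytes are pulled into memory, and the payload is then written back out under /var/lib/minikube/certs. A minimal sketch of the probe-then-read half of that pattern, assuming golang.org/x/crypto/ssh and purely illustrative host, port, and key-path values (this is not minikube's actual implementation):

// sshsync.go - sketch of probing a remote file and reading it into memory
// over SSH, mirroring the stat -c %s / cat steps seen in the log above.
package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Illustrative key path; the log uses the profile's machines/<name>/id_rsa.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-198834/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32813", cfg) // port is illustrative
	if err != nil {
		panic(err)
	}
	defer client.Close()

	run := func(cmd string) string {
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		if err != nil {
			panic(err)
		}
		return strings.TrimSpace(string(out))
	}

	size := run("stat -c %s /var/lib/minikube/certs/sa.pub")
	content := run("sudo cat /var/lib/minikube/certs/sa.pub")
	fmt.Printf("sa.pub is %s bytes; read %d bytes into memory\n", size, len(content))
}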
	I0917 00:07:11.586318  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:07:11.600141  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:07:11.619035  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.625867  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.626054  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:11.639263  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:07:11.653785  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:07:11.672133  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.681092  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.681171  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:07:11.692463  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:07:11.707982  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:07:11.728502  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.735225  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.735287  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:07:11.745817  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
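Each CA bundle dropped under /usr/share/ca-certificates above is linked into /etc/ssl/certs twice: once by name and once under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is the filename OpenSSL's CA lookup machinery expects. A minimal sketch of that hash-and-symlink step, using the standard library plus the openssl binary; the helper name is illustrative, not minikube's:

// cahash.go - compute a CA certificate's OpenSSL subject hash and expose it
// as the <hash>.0 symlink in /etc/ssl/certs, as the log does with ln -fs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// ln -fs equivalent: drop any stale link, then point it at the PEM.
	_ = os.Remove(link)
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCA("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println("created", link, "err:", err)
}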
	I0917 00:07:11.762496  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:07:11.768239  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:07:11.782100  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:07:11.796792  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:07:11.807595  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:07:11.818618  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:07:11.828824  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
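The six openssl invocations above all pass -checkend 86400: openssl exits non-zero if the certificate would expire within the next 24 hours, which is what would force regeneration instead of reuse. A minimal sketch of the same check; the helper name is hypothetical and the check is run locally here rather than over ssh_runner:

// certcheck.go - mirror of the "openssl x509 -checkend 86400" probe above.
package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay reports whether the certificate expires in the next
// 86400 seconds, distinguishing that from openssl failing outright.
func expiresWithinADay(certPath string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: cert expires within 24h
		}
		return false, err // openssl itself failed (missing file, bad PEM, ...)
	}
	return false, nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithinADay(p)
		fmt.Println(p, "expires within 24h:", soon, "err:", err)
	}
}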
	I0917 00:07:11.839591  767194 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0917 00:07:11.839780  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:07:11.839824  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:07:11.839873  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:07:11.860859  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:07:11.861012  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
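Because `lsmod | grep ip_vs` exited with status 1, minikube gave up on IPVS-based control-plane load balancing and rendered the kube-vip static pod above in plain ARP mode: the VIP 192.168.49.254 is advertised only by whichever control-plane node currently holds the plndr-cp-lock lease. A minimal sketch of that probe-and-fallback decision, with hypothetical names (not minikube's kube-vip.go):

// vipmode.go - choose between IPVS load balancing and ARP/leader-election
// mode for the control-plane VIP, based on whether ip_vs modules are loaded.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func ipvsAvailable() bool {
	out, err := exec.Command("sh", "-c", "lsmod | grep ip_vs").CombinedOutput()
	// grep exits 1 when nothing matches, which surfaces as an error here.
	return err == nil && strings.Contains(string(out), "ip_vs")
}

func main() {
	mode := "ARP (leader election via plndr-cp-lock lease)"
	if ipvsAvailable() {
		mode = "IPVS control-plane load balancing"
	}
	fmt.Println("kube-vip mode for VIP 192.168.49.254:", mode)
}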
	I0917 00:07:11.861079  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:07:11.879762  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:07:11.879865  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:07:11.896560  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:07:11.928442  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:07:11.958532  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:07:11.988805  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:07:11.997336  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:12.017582  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:12.177262  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:12.199102  767194 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:07:12.199621  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:12.202718  767194 out.go:179] * Verifying Kubernetes components...
	I0917 00:07:12.204066  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:12.356191  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:12.380335  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:07:12.380472  767194 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:07:12.380985  767194 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0917 00:07:13.184442  767194 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0917 00:07:13.184486  767194 node_ready.go:38] duration metric: took 803.457553ms for node "ha-198834-m02" to be "Ready" ...
	I0917 00:07:13.184510  767194 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:07:13.184576  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:13.200500  767194 api_server.go:72] duration metric: took 1.001333458s to wait for apiserver process to appear ...
	I0917 00:07:13.200532  767194 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:07:13.200555  767194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:07:13.213606  767194 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:07:13.214727  767194 api_server.go:141] control plane version: v1.34.0
	I0917 00:07:13.214764  767194 api_server.go:131] duration metric: took 14.223116ms to wait for apiserver health ...
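The node wait above proceeds in three stages: node "Ready", a running kube-apiserver process (pgrep), and finally HTTPS GET /healthz returning 200 with body "ok". A minimal sketch of that last stage; InsecureSkipVerify is used only to keep the example short, whereas minikube verifies against the cluster ca.crt:

// healthz.go - poll an apiserver /healthz endpoint until it answers 200 "ok",
// as the log does for https://192.168.49.2:8443/healthz.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute))
}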
	I0917 00:07:13.214777  767194 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:07:13.256193  767194 system_pods.go:59] 24 kube-system pods found
	I0917 00:07:13.256242  767194 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.256252  767194 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.256264  767194 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:07:13.256270  767194 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:07:13.256275  767194 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:07:13.256280  767194 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:07:13.256284  767194 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:07:13.256289  767194 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:07:13.256293  767194 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:07:13.256298  767194 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:07:13.256303  767194 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:07:13.256308  767194 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:07:13.256313  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:07:13.256318  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:07:13.256322  767194 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:07:13.256327  767194 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:07:13.256333  767194 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:07:13.256338  767194 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:07:13.256343  767194 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:07:13.256347  767194 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:07:13.256354  767194 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0917 00:07:13.256358  767194 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:07:13.256363  767194 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:07:13.256369  767194 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:07:13.256384  767194 system_pods.go:74] duration metric: took 41.59977ms to wait for pod list to return data ...
	I0917 00:07:13.256395  767194 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:07:13.264291  767194 default_sa.go:45] found service account: "default"
	I0917 00:07:13.264324  767194 default_sa.go:55] duration metric: took 7.92079ms for default service account to be created ...
	I0917 00:07:13.264336  767194 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:07:13.276453  767194 system_pods.go:86] 24 kube-system pods found
	I0917 00:07:13.276550  767194 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.276578  767194 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:07:13.276615  767194 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:07:13.276644  767194 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:07:13.276660  767194 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:07:13.276676  767194 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:07:13.276691  767194 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:07:13.276720  767194 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:07:13.276746  767194 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:07:13.276763  767194 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:07:13.276778  767194 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:07:13.276793  767194 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:07:13.276822  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:07:13.276857  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:07:13.276872  767194 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:07:13.276885  767194 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:07:13.277012  767194 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:07:13.277120  767194 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:07:13.277142  767194 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:07:13.277175  767194 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:07:13.277203  767194 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:07:13.277208  767194 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:07:13.277217  767194 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:07:13.277225  767194 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:07:13.277236  767194 system_pods.go:126] duration metric: took 12.891282ms to wait for k8s-apps to be running ...
	I0917 00:07:13.277249  767194 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:07:13.277375  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:07:13.297819  767194 system_svc.go:56] duration metric: took 20.558975ms WaitForService to wait for kubelet
	I0917 00:07:13.297852  767194 kubeadm.go:578] duration metric: took 1.098690951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:07:13.297875  767194 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:07:13.307482  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307521  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307539  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307677  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307701  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:07:13.307723  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:07:13.307753  767194 node_conditions.go:105] duration metric: took 9.872298ms to run NodePressure ...
	I0917 00:07:13.307786  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:07:13.307825  767194 start.go:255] writing updated cluster config ...
	I0917 00:07:13.310261  767194 out.go:203] 
	I0917 00:07:13.313110  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:13.313320  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.314968  767194 out.go:179] * Starting "ha-198834-m03" control-plane node in "ha-198834" cluster
	I0917 00:07:13.316602  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:07:13.318003  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:07:13.319806  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:07:13.319840  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:07:13.319992  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:07:13.320012  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:07:13.320251  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.320825  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:07:13.347419  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:07:13.347438  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:07:13.347454  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:07:13.347496  767194 start.go:360] acquireMachinesLock for ha-198834-m03: {Name:mk4dabc098a240f7afab19054f40d0106bd7a469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:07:13.347565  767194 start.go:364] duration metric: took 45.359µs to acquireMachinesLock for "ha-198834-m03"
	I0917 00:07:13.347590  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:07:13.347599  767194 fix.go:54] fixHost starting: m03
	I0917 00:07:13.347818  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:07:13.369824  767194 fix.go:112] recreateIfNeeded on ha-198834-m03: state=Stopped err=<nil>
	W0917 00:07:13.369863  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:07:13.371420  767194 out.go:252] * Restarting existing docker container for "ha-198834-m03" ...
	I0917 00:07:13.371502  767194 cli_runner.go:164] Run: docker start ha-198834-m03
	I0917 00:07:13.655593  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m03 --format={{.State.Status}}
	I0917 00:07:13.677720  767194 kic.go:430] container "ha-198834-m03" state is running.
	I0917 00:07:13.678397  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:13.698869  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:07:13.699223  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:07:13.699297  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:13.720009  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:13.720402  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:13.720423  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:07:13.721130  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37410->127.0.0.1:32823: read: connection reset by peer
	I0917 00:07:16.888288  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0917 00:07:16.888424  767194 ubuntu.go:182] provisioning hostname "ha-198834-m03"
	I0917 00:07:16.888511  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:16.916245  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:16.916715  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:16.916774  767194 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m03 && echo "ha-198834-m03" | sudo tee /etc/hostname
	I0917 00:07:17.072762  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m03
	
	I0917 00:07:17.072849  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.090683  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.090891  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.090926  767194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:07:17.226615  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:07:17.226655  767194 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:07:17.226674  767194 ubuntu.go:190] setting up certificates
	I0917 00:07:17.226686  767194 provision.go:84] configureAuth start
	I0917 00:07:17.226737  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:17.243928  767194 provision.go:143] copyHostCerts
	I0917 00:07:17.243981  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:07:17.244016  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:07:17.244028  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:07:17.244117  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:07:17.244225  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:07:17.244251  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:07:17.244261  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:07:17.244308  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:07:17.244380  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:07:17.244407  767194 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:07:17.244416  767194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:07:17.244453  767194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:07:17.244535  767194 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m03 san=[127.0.0.1 192.168.49.4 ha-198834-m03 localhost minikube]
	I0917 00:07:17.292018  767194 provision.go:177] copyRemoteCerts
	I0917 00:07:17.292080  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:07:17.292117  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.308563  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:17.405828  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:07:17.405893  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:07:17.431262  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:07:17.431334  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:07:17.455746  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:07:17.455816  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:07:17.480475  767194 provision.go:87] duration metric: took 253.772124ms to configureAuth
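configureAuth above regenerates the machine's Docker TLS material: a server certificate whose SANs cover 127.0.0.1, 192.168.49.4, ha-198834-m03, localhost and minikube, copied to /etc/docker alongside ca.pem and server-key.pem so dockerd can run with --tlsverify. A minimal, self-signed sketch of building a certificate with those SANs using only the standard library (minikube instead signs it with ca.pem/ca-key.pem; this is not its code path):

// servercert.go - generate a server certificate carrying the SANs shown in
// the provision.go:117 log line, self-signed for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-198834-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}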
	I0917 00:07:17.480509  767194 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:07:17.480714  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:17.480758  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.497376  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.497580  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.497596  767194 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:07:17.633636  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:07:17.633662  767194 ubuntu.go:71] root file system type: overlay
	I0917 00:07:17.633805  767194 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:07:17.633874  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.651414  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.651681  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.651795  767194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:07:17.804026  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:07:17.804120  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.820842  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:07:17.821111  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0917 00:07:17.821138  767194 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:07:17.969667  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
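The rendered unit above is written to docker.service.new and only promoted (followed by daemon-reload, enable, and restart) when `diff -u` shows it differs from the live file, so an unchanged configuration never restarts Docker. A minimal sketch of that write-if-changed pattern, with a hypothetical helper operating on local files rather than over SSH:

// unitswap.go - replace a systemd unit and restart its daemon only when the
// rendered contents actually changed, mirroring the diff || { mv; restart; }
// one-liner in the log.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func swapIfChanged(live, rendered string) (bool, error) {
	old, err := os.ReadFile(live)
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	if bytes.Equal(old, []byte(rendered)) {
		return false, nil // identical: no restart needed
	}
	if err := os.WriteFile(live+".new", []byte(rendered), 0o644); err != nil {
		return false, err
	}
	if err := os.Rename(live+".new", live); err != nil {
		return false, err
	}
	// daemon-reload + restart, mirroring the systemctl calls in the log.
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return true, err
		}
	}
	return true, nil
}

func main() {
	changed, err := swapIfChanged("/lib/systemd/system/docker.service", "[Unit]\n...")
	fmt.Println("changed:", changed, "err:", err)
}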
	I0917 00:07:17.969696  767194 machine.go:96] duration metric: took 4.270454946s to provisionDockerMachine
	I0917 00:07:17.969711  767194 start.go:293] postStartSetup for "ha-198834-m03" (driver="docker")
	I0917 00:07:17.969724  767194 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:07:17.969792  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:07:17.969841  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:17.990397  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.094261  767194 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:07:18.098350  767194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:07:18.098388  767194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:07:18.098399  767194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:07:18.098407  767194 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:07:18.098437  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:07:18.098499  767194 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:07:18.098595  767194 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:07:18.098610  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:07:18.098725  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:07:18.109219  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:18.136620  767194 start.go:296] duration metric: took 166.894782ms for postStartSetup
	I0917 00:07:18.136712  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:07:18.136758  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.154707  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.253452  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:07:18.258750  767194 fix.go:56] duration metric: took 4.91114427s for fixHost
	I0917 00:07:18.258774  767194 start.go:83] releasing machines lock for "ha-198834-m03", held for 4.911195885s
	I0917 00:07:18.258832  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m03
	I0917 00:07:18.277160  767194 out.go:179] * Found network options:
	I0917 00:07:18.278351  767194 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:07:18.279348  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279378  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279406  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:07:18.279425  767194 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:07:18.279508  767194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:07:18.279557  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.279572  767194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:07:18.279629  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m03
	I0917 00:07:18.297009  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.297357  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m03/id_rsa Username:docker}
	I0917 00:07:18.461356  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:07:18.481814  767194 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:07:18.481895  767194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:07:18.491087  767194 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:07:18.491123  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:07:18.491159  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:07:18.491286  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:07:18.508046  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:07:18.518506  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:07:18.528724  767194 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:07:18.528783  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:07:18.538901  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:07:18.548523  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:07:18.558495  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:07:18.568810  767194 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:07:18.578635  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:07:18.588831  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:07:18.599026  767194 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:07:18.608953  767194 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:07:18.617676  767194 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:07:18.629264  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:18.768747  767194 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:07:18.967427  767194 start.go:495] detecting cgroup driver to use...
	I0917 00:07:18.967485  767194 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:07:18.967537  767194 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:07:18.989293  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:07:19.005620  767194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:07:19.028890  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:07:19.040741  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:07:19.052468  767194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:07:19.069901  767194 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:07:19.074018  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:07:19.084197  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:07:19.103723  767194 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:07:19.235291  767194 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:07:19.383050  767194 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:07:19.383098  767194 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:07:19.407054  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:07:19.421996  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:19.555630  767194 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:07:50.623187  767194 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.067490574s)
	I0917 00:07:50.623303  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:07:50.641030  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:07:50.658671  767194 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:07:50.689413  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:50.703046  767194 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:07:50.803170  767194 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:07:50.901724  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:50.993561  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:07:51.017479  767194 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:07:51.029545  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:51.119869  767194 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:07:51.204520  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:07:51.216519  767194 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:07:51.216591  767194 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:07:51.220572  767194 start.go:563] Will wait 60s for crictl version
	I0917 00:07:51.220624  767194 ssh_runner.go:195] Run: which crictl
	I0917 00:07:51.224162  767194 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:07:51.260602  767194 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:07:51.260663  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:51.285759  767194 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:07:51.312885  767194 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:07:51.314109  767194 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:07:51.315183  767194 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:07:51.316372  767194 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:07:51.333621  767194 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:07:51.337646  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:51.349463  767194 mustload.go:65] Loading cluster: ha-198834
	I0917 00:07:51.349718  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:51.350027  767194 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:07:51.366938  767194 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:07:51.367221  767194 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.4
	I0917 00:07:51.367234  767194 certs.go:194] generating shared ca certs ...
	I0917 00:07:51.367257  767194 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:07:51.367403  767194 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:07:51.367473  767194 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:07:51.367486  767194 certs.go:256] generating profile certs ...
	I0917 00:07:51.367595  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:07:51.367661  767194 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.49f70783
	I0917 00:07:51.367716  767194 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:07:51.367732  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:07:51.367752  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:07:51.367770  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:07:51.367789  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:07:51.367807  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:07:51.367832  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:07:51.367852  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:07:51.367869  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:07:51.367977  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:07:51.368020  767194 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:07:51.368035  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:07:51.368076  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:07:51.368123  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:07:51.368156  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:07:51.368219  767194 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:07:51.368269  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:07:51.368293  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:07:51.368313  767194 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:51.368380  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:07:51.385113  767194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:07:51.473207  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:07:51.477858  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:07:51.490558  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:07:51.494138  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:07:51.507164  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:07:51.510845  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:07:51.523649  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:07:51.527311  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:07:51.539889  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:07:51.543488  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:07:51.557348  767194 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:07:51.561022  767194 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:07:51.575140  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:07:51.600746  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:07:51.626754  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:07:51.652660  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:07:51.677825  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:07:51.705137  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:07:51.740575  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:07:51.782394  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:07:51.821612  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:07:51.869185  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:07:51.909129  767194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:07:51.951856  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:07:51.980155  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:07:52.009170  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:07:52.038558  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:07:52.065379  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:07:52.093597  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:07:52.126589  767194 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:07:52.157625  767194 ssh_runner.go:195] Run: openssl version
	I0917 00:07:52.165683  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:07:52.182691  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.188710  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.188782  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:07:52.198794  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:07:52.213539  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:07:52.228292  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.233558  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.233622  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:07:52.242917  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:07:52.253428  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:07:52.264188  767194 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.268190  767194 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.268248  767194 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:07:52.275453  767194 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
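The ln -fs steps above create the hashed symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's CA lookup expects under /etc/ssl/certs: each link name is the certificate's subject hash, as printed by `openssl x509 -hash -noout`, plus a ".0" suffix. A sketch of computing that link name for a PEM file; it shells out to openssl rather than reproducing the hash in Go, and the sample path is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHashLink returns the "<hash>.0" filename OpenSSL expects for a CA
    // certificate, by asking openssl for the subject hash of the PEM file.
    func subjectHashLink(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
    	// Hypothetical path; the log targets /usr/share/ca-certificates/minikubeCA.pem.
    	link, err := subjectHashLink("minikubeCA.pem")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("symlink name:", link)
    }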
	I0917 00:07:52.285681  767194 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:07:52.289640  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:07:52.297959  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:07:52.305434  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:07:52.313682  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:07:52.322656  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:07:52.330627  767194 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
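Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means the cert expires within a day and needs regenerating. A minimal sketch of that check from Go, with the 24-hour window and invocation taken from the log and the wrapper itself purely illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // expiresWithinADay reports whether the certificate at path expires in the
    // next 86400 seconds, using the same openssl invocation seen in the log.
    // openssl exits 0 if the cert is still valid at the deadline, non-zero
    // otherwise; any error (including a missing file) is treated as expiring.
    func expiresWithinADay(path string) bool {
    	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
    	return cmd.Run() != nil
    }

    func main() {
    	// Hypothetical local path; the node-side checks target files such as
    	// /var/lib/minikube/certs/apiserver-kubelet-client.crt.
    	if expiresWithinADay("apiserver-kubelet-client.crt") {
    		fmt.Println("certificate expires within 24h; regenerate it")
    	} else {
    		fmt.Println("certificate is good for at least another day")
    	}
    }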
	I0917 00:07:52.338015  767194 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0917 00:07:52.338141  767194 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
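Only two values in the kubelet unit above are specific to this node: --hostname-override=ha-198834-m03 and --node-ip=192.168.49.4; the binary path and kubeconfig flags are shared across the cluster. A hedged sketch of rendering that ExecStart line from per-node parameters (the template is illustrative, not the tool's own):

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletNode carries the per-node fields from the log's ExecStart line.
    type kubeletNode struct {
    	Version  string
    	Hostname string
    	NodeIP   string
    }

    const execStart = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet ` +
    	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
    	`--config=/var/lib/kubelet/config.yaml ` +
    	`--hostname-override={{.Hostname}} ` +
    	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
    	`--node-ip={{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("execstart").Parse(execStart))
    	// Values for the third control-plane node, as seen in the log.
    	_ = t.Execute(os.Stdout, kubeletNode{Version: "v1.34.0", Hostname: "ha-198834-m03", NodeIP: "192.168.49.4"})
    }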
	I0917 00:07:52.338171  767194 kube-vip.go:115] generating kube-vip config ...
	I0917 00:07:52.338230  767194 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:07:52.353235  767194 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:07:52.353321  767194 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
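Because `lsmod | grep ip_vs` exited with status 1, the generated kube-vip manifest above skips IPVS control-plane load-balancing and relies on ARP-mode leader election to hold the VIP 192.168.49.254 on eth0. A sketch of that kind of kernel-module probe, reading /proc/modules directly instead of shelling out (illustrative only):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // hasKernelModule reports whether a module (e.g. "ip_vs") appears in
    // /proc/modules, which is the same information `lsmod` prints.
    func hasKernelModule(name string) (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if strings.HasPrefix(sc.Text(), name+" ") {
    			return true, nil
    		}
    	}
    	return false, sc.Err()
    }

    func main() {
    	ok, err := hasKernelModule("ip_vs")
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	if ok {
    		fmt.Println("ip_vs available: IPVS load-balancing could be enabled")
    	} else {
    		fmt.Println("ip_vs missing: fall back to ARP-mode kube-vip only")
    	}
    }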
	I0917 00:07:52.353383  767194 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:07:52.364085  767194 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:07:52.364180  767194 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:07:52.374489  767194 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:07:52.394684  767194 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:07:52.414928  767194 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:07:52.435081  767194 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:07:52.439302  767194 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:07:52.451073  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:52.596707  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:52.610374  767194 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:07:52.610770  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:07:52.613091  767194 out.go:179] * Verifying Kubernetes components...
	I0917 00:07:52.614497  767194 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:07:52.748599  767194 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:07:52.767051  767194 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	W0917 00:07:52.767139  767194 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:07:52.767427  767194 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m03" to be "Ready" ...
	I0917 00:07:52.771001  767194 node_ready.go:49] node "ha-198834-m03" is "Ready"
	I0917 00:07:52.771035  767194 node_ready.go:38] duration metric: took 3.579349ms for node "ha-198834-m03" to be "Ready" ...
	I0917 00:07:52.771053  767194 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:07:52.771108  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:53.272115  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:53.771243  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:54.271592  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:54.772153  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:55.272098  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:55.771893  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:56.271870  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:56.771931  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:57.271565  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:57.771663  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:58.272256  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:58.772138  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:59.272247  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:07:59.772002  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:00.271313  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:00.771538  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:01.272173  767194 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:08:01.287212  767194 api_server.go:72] duration metric: took 8.676772616s to wait for apiserver process to appear ...
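The half-second cadence of the pgrep lines above is a fixed-interval poll for the kube-apiserver process; it took roughly 8.7s for the new control plane's apiserver to appear. A hedged sketch of that kind of wait loop, with the pgrep pattern copied from the log and the loop itself illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process exists
    // or the timeout expires, mirroring the half-second polling in the log.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Same pattern as the log: pgrep -xnf kube-apiserver.*minikube.*
    		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(6 * time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver process is up")
    }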
	I0917 00:08:01.287241  767194 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:08:01.287263  767194 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:08:01.291600  767194 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:08:01.292548  767194 api_server.go:141] control plane version: v1.34.0
	I0917 00:08:01.292573  767194 api_server.go:131] duration metric: took 5.323927ms to wait for apiserver health ...
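Once the process exists, readiness is confirmed by an HTTPS GET against /healthz (200 with body "ok") followed by a version probe. A minimal sketch of such a health check; the endpoint matches the log, while the CA path and client wiring are assumptions, not the test's own client:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // healthz fetches https://<host>/healthz using the cluster CA and returns the body.
    func healthz(host, caPath string) (string, error) {
    	caPEM, err := os.ReadFile(caPath)
    	if err != nil {
    		return "", err
    	}
    	pool := x509.NewCertPool()
    	if !pool.AppendCertsFromPEM(caPEM) {
    		return "", fmt.Errorf("no CA certs in %s", caPath)
    	}
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	resp, err := client.Get("https://" + host + "/healthz")
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	return string(body), err
    }

    func main() {
    	// Hypothetical CA path; a healthy apiserver answers with "ok".
    	body, err := healthz("192.168.49.2:8443", os.Getenv("HOME")+"/.minikube/ca.crt")
    	if err != nil {
    		fmt.Println("healthz failed:", err)
    		return
    	}
    	fmt.Println("healthz:", body)
    }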
	I0917 00:08:01.292583  767194 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:08:01.299296  767194 system_pods.go:59] 24 kube-system pods found
	I0917 00:08:01.299329  767194 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0917 00:08:01.299337  767194 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:08:01.299343  767194 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:08:01.299349  767194 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:08:01.299354  767194 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:08:01.299360  767194 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:08:01.299374  767194 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:08:01.299383  767194 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:08:01.299391  767194 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:08:01.299396  767194 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:08:01.299405  767194 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:08:01.299410  767194 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:08:01.299417  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:08:01.299426  767194 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:08:01.299434  767194 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:08:01.299440  767194 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.299452  767194 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.299462  767194 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:08:01.299474  767194 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:08:01.299483  767194 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:08:01.299488  767194 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:08:01.299495  767194 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:08:01.299500  767194 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:08:01.299507  767194 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:08:01.299515  767194 system_pods.go:74] duration metric: took 6.92458ms to wait for pod list to return data ...
	I0917 00:08:01.299527  767194 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:08:01.302268  767194 default_sa.go:45] found service account: "default"
	I0917 00:08:01.302290  767194 default_sa.go:55] duration metric: took 2.753628ms for default service account to be created ...
	I0917 00:08:01.302298  767194 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:08:01.308262  767194 system_pods.go:86] 24 kube-system pods found
	I0917 00:08:01.308290  767194 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running
	I0917 00:08:01.308297  767194 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:08:01.308303  767194 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:08:01.308308  767194 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:08:01.308313  767194 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:08:01.308318  767194 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:08:01.308328  767194 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:08:01.308338  767194 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:08:01.308345  767194 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:08:01.308353  767194 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:08:01.308358  767194 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:08:01.308366  767194 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:08:01.308372  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:08:01.308382  767194 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:08:01.308387  767194 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:08:01.308399  767194 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.308406  767194 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:08:01.308416  767194 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:08:01.308422  767194 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:08:01.308430  767194 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:08:01.308437  767194 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running
	I0917 00:08:01.308444  767194 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:08:01.308450  767194 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:08:01.308457  767194 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:08:01.308466  767194 system_pods.go:126] duration metric: took 6.162144ms to wait for k8s-apps to be running ...
	I0917 00:08:01.308477  767194 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:08:01.308531  767194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:08:01.321442  767194 system_svc.go:56] duration metric: took 12.955822ms WaitForService to wait for kubelet
	I0917 00:08:01.321471  767194 kubeadm.go:578] duration metric: took 8.711043606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:08:01.321497  767194 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:08:01.324862  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324889  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324932  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324940  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324955  767194 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:08:01.324965  767194 node_conditions.go:123] node cpu capacity is 8
	I0917 00:08:01.324975  767194 node_conditions.go:105] duration metric: took 3.472737ms to run NodePressure ...
	I0917 00:08:01.324991  767194 start.go:241] waiting for startup goroutines ...
	I0917 00:08:01.325019  767194 start.go:255] writing updated cluster config ...
	I0917 00:08:01.327247  767194 out.go:203] 
	I0917 00:08:01.328726  767194 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:08:01.328814  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.330445  767194 out.go:179] * Starting "ha-198834-m04" worker node in "ha-198834" cluster
	I0917 00:08:01.331747  767194 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:08:01.333143  767194 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:08:01.334280  767194 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:08:01.334304  767194 cache.go:58] Caching tarball of preloaded images
	I0917 00:08:01.334314  767194 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:08:01.334421  767194 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:08:01.334508  767194 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:08:01.334619  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.354767  767194 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:08:01.354793  767194 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:08:01.354813  767194 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:08:01.354846  767194 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:08:01.354978  767194 start.go:364] duration metric: took 110.48µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:08:01.355008  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:08:01.355019  767194 fix.go:54] fixHost starting: m04
	I0917 00:08:01.355235  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:08:01.371130  767194 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:08:01.371158  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:08:01.373077  767194 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:08:01.373153  767194 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:08:01.641002  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:08:01.659099  767194 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:08:01.659469  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:08:01.678005  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:08:01.678237  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:08:01.678290  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:08:01.696742  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:08:01.697129  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0917 00:08:01.697150  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:08:01.697961  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37464->127.0.0.1:32828: read: connection reset by peer
	I0917 00:08:04.699300  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:07.701796  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:10.702633  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:13.704979  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:16.706261  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:19.708223  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:22.709325  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:25.709823  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:28.712117  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:31.713282  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:34.713692  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:37.714198  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:40.714526  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:43.715144  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:46.716332  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:49.718233  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:52.719842  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:55.720892  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:08:58.723145  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:01.724306  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:04.725156  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:07.727215  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:10.727548  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:13.729824  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:16.730195  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:19.732187  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:22.733240  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:25.734470  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:28.736754  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:31.737738  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:34.738212  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:37.740201  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:40.740629  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:43.742209  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:46.743230  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:49.743812  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:52.745547  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:55.746133  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:09:58.747347  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:01.748104  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:04.749384  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:07.751199  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:10.751605  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:13.754005  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:16.755405  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:19.757166  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:22.759220  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:25.760523  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:28.762825  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:31.764155  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:34.765318  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:37.767696  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:40.768111  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:43.768686  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:46.769636  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:49.771919  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:52.774246  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:55.774600  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:10:58.776146  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0917 00:11:01.777005  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:11:01.777043  767194 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:11:01.777121  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.795827  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.795926  767194 machine.go:96] duration metric: took 3m0.117674387s to provisionDockerMachine
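The three minutes of connection-refused lines above are the SSH provisioner re-dialing 127.0.0.1:32828 roughly every 3s until provisionDockerMachine gives up at the 3m0s mark; the restarted ha-198834-m04 container never exposed a working sshd, so the subsequent port-22 inspects fail as well. A generic sketch of a dial-until-deadline loop of that shape, with the address and intervals taken from the log and the helper itself illustrative:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialWithDeadline retries a TCP dial every interval until it succeeds or
    // the overall deadline passes, the pattern behind the repeated
    // "connection refused" lines in the log.
    func dialWithDeadline(addr string, interval, timeout time.Duration) (net.Conn, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, interval)
    		if err == nil {
    			return conn, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("giving up on %s: %w", addr, err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	conn, err := dialWithDeadline("127.0.0.1:32828", 3*time.Second, 3*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("connected to", conn.RemoteAddr())
    }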
	I0917 00:11:01.796029  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:11:01.796065  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.813326  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.813470  767194 retry.go:31] will retry after 152.729446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:01.966929  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:01.985775  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:01.985883  767194 retry.go:31] will retry after 397.218731ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:02.383496  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:02.403581  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:02.403703  767194 retry.go:31] will retry after 638.635672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.042529  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.059560  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:03.059686  767194 retry.go:31] will retry after 704.769086ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.765290  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.783784  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:11:03.783946  767194 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:03.783981  767194 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:03.784042  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:11:03.784097  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:03.801467  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:03.801578  767194 retry.go:31] will retry after 205.36367ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.008065  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.026061  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.026199  767194 retry.go:31] will retry after 386.510214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.413871  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.432422  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.432542  767194 retry.go:31] will retry after 536.785381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:04.970143  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:04.987140  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:11:04.987259  767194 retry.go:31] will retry after 666.945417ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.654998  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:11:05.677613  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:11:05.677742  767194 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:05.677760  767194 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.677774  767194 fix.go:56] duration metric: took 3m4.322754949s for fixHost
	I0917 00:11:05.677787  767194 start.go:83] releasing machines lock for "ha-198834-m04", held for 3m4.322792335s
	W0917 00:11:05.677805  767194 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:11:05.677949  767194 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:11:05.677962  767194 start.go:729] Will try again in 5 seconds ...
	I0917 00:11:10.678811  767194 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:11:10.678978  767194 start.go:364] duration metric: took 125.961µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:11:10.679012  767194 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:11:10.679023  767194 fix.go:54] fixHost starting: m04
	I0917 00:11:10.679331  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:11:10.696334  767194 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:11:10.696364  767194 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:11:10.698674  767194 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:11:10.698775  767194 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:11:10.958441  767194 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:11:10.976858  767194 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:11:10.977249  767194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:11:10.996019  767194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:11:10.996308  767194 machine.go:93] provisionDockerMachine start ...
	I0917 00:11:10.996391  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:11:11.014622  767194 main.go:141] libmachine: Using SSH client type: native
	I0917 00:11:11.014851  767194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0917 00:11:11.014862  767194 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:11:11.015528  767194 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40006->127.0.0.1:32833: read: connection reset by peer
	I0917 00:11:14.016664  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:17.018409  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:20.020719  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:23.023197  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:26.024253  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:29.026231  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:32.027234  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:35.028559  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:38.030180  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:41.030858  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:44.031976  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:47.032386  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:50.034183  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:53.036585  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:56.037322  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:11:59.039174  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:02.040643  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:05.042141  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:08.044484  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:11.044866  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:14.045168  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:17.046169  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:20.047738  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:23.049217  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:26.050288  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:29.052601  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:32.053185  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:35.054173  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:38.056589  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:41.056901  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:44.057410  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:47.058856  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:50.059838  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:53.061223  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:56.061941  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:12:59.064269  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:02.065654  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:05.066720  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:08.069008  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:11.070247  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:14.071588  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:17.073030  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:20.075194  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:23.075889  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:26.077261  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:29.079216  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:32.080240  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:35.080740  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:38.083067  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:41.083410  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:44.084470  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:47.085187  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:50.087373  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:53.089182  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:56.090200  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:13:59.091003  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:02.092270  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:05.093183  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:08.094399  767194 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0917 00:14:11.094584  767194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:14:11.094618  767194 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:14:11.094699  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.112633  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.112730  767194 machine.go:96] duration metric: took 3m0.1164066s to provisionDockerMachine
	I0917 00:14:11.112808  767194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:14:11.112848  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.131340  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.131459  767194 retry.go:31] will retry after 217.33373ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:11.349947  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.367764  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.367886  767194 retry.go:31] will retry after 328.999453ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:11.697508  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:11.715227  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:11.715392  767194 retry.go:31] will retry after 827.670309ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.544130  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.562142  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:14:12.562261  767194 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:12.562274  767194 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.562322  767194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:14:12.562353  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.581698  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:12.581803  767194 retry.go:31] will retry after 257.155823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:12.839282  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:12.856512  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:12.856617  767194 retry.go:31] will retry after 258.093075ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:13.115042  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:13.133383  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:13.133525  767194 retry.go:31] will retry after 435.275696ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:13.569043  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:13.587245  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	I0917 00:14:13.587350  767194 retry.go:31] will retry after 560.286621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.148585  767194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	W0917 00:14:14.167049  767194 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04 returned with exit code 1
	W0917 00:14:14.167159  767194 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:14.167179  767194 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.167190  767194 fix.go:56] duration metric: took 3m3.488169176s for fixHost
	I0917 00:14:14.167197  767194 start.go:83] releasing machines lock for "ha-198834-m04", held for 3m3.488205367s
	W0917 00:14:14.167315  767194 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-198834" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:14:14.169966  767194 out.go:203] 
	W0917 00:14:14.171309  767194 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:14:14.171324  767194 out.go:285] * 
	W0917 00:14:14.173015  767194 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:14:14.174398  767194 out.go:203] 
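The exit above ("unable to inspect a not running container to get SSH port") means the ha-198834-m04 node container never came back up, so the template that extracts the host port mapped to 22/tcp has nothing to index. A minimal shell sketch for checking this by hand, reusing the exact inspect format and profile/container names already shown in the log (diagnostic aid only, not part of the test):

    # Is the node container running at all?
    docker ps -a --filter name=ha-198834-m04 --format '{{.Names}}\t{{.Status}}'

    # The same port lookup minikube retries above; it only succeeds while the container is running:
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-198834-m04

    # The recovery suggested in the log output:
    minikube delete -p ha-198834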
	
	
	==> Docker <==
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Setting cgroupDriver systemd"
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 17 00:06:32 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:32Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 17 00:06:32 ha-198834 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-pstjp_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"870758f308362bc20e83047a4adf1621caf84b44c5752280d8fc86e4c48fbcab\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000\""
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b47695e7722ae97363ea22c63f66096a6ecc511747e54aac5f8ef52c2bccc43f/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/beb17aaed35c336b100468a8af1e4d5a446acc16a51b6d88c169b26f731e4d18/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a13ea6d24610a4b3fe0f24eb6ae80782a60d62b4d2d9232966b5779cbab4b54/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af005efeb3a09eef7fbb97f4b29e8c0d2980e77ba4c7ceccc514d8de19a0c461/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:06:33 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2edf887287bbf8068cce63b7faf1f32074cd90688f7befba7a02a4cb8b00d85f/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:34 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2\""
	Sep 17 00:06:34 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7b09bb6b1db0bb2224a5805349fc4d6295cace54536b13098ccf873075486000\""
	Sep 17 00:06:39 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/02337f9cf4b1297217a71f717a99d7fd2b400649baf91af0fe3e64f2ae3bf34b/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a6d2a2147c23d6db38977d2b195118845bcf0f4b7b50bd65e59156087c8f4a36/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ec5acf265466354c265f4a5a6c47300c16e052d876e5b879f13c8cb25513d1df/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd69455479fe49678a69e6c15e7428cf2e0933a67e62ce21b42adc2ddffbbc50/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4da814f488dc35aa80427876bce77b335fc3a2333320170df1e542d7dbf76b68/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:06:40 ha-198834 dockerd[794]: time="2025-09-17T00:06:40.918358490Z" level=info msg="ignoring event" container=c593c83411d202af565aa578ee9c507fe6076579aab28504b4f9fc77eebb5e49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 00:06:40 ha-198834 cri-dockerd[1127]: time="2025-09-17T00:06:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/13e9e86bdcc31c2473895f9f8e326522c316dee735315cefa4058543e1714435/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:06:52 ha-198834 dockerd[794]: time="2025-09-17T00:06:52.494377563Z" level=info msg="ignoring event" container=1625a23fd7f91dfa311956f9315bcae7fdde0540127a12f56cf5429b147e1f07 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
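The "cannot find network namespace for the terminated container" messages refer to pod sandboxes that exited before this restart, so cri-dockerd's status hook has nothing to query. A hedged way to confirm that on the node (assumes the docker driver, where ha-198834 is itself a container running a nested Docker daemon; the container ID and resolv.conf path are taken verbatim from the lines above):

    # State of one of the sandboxes named in the CNI warnings:
    docker exec ha-198834 docker inspect -f '{{.State.Status}}' 2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41

    # Verify one of the rewritten resolv.conf files landed as logged:
    docker exec ha-198834 cat /var/lib/docker/containers/b47695e7722ae97363ea22c63f66096a6ecc511747e54aac5f8ef52c2bccc43f/resolv.conf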
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	64ab62b23e778       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       2                   a6d2a2147c23d       storage-provisioner
	ab70c5e50e54c       765655ea60781                                                                                         7 minutes ago       Running             kube-vip                  1                   2edf887287bbf       kube-vip-ha-198834
	bdc52003487f9       409467f978b4a                                                                                         7 minutes ago       Running             kindnet-cni               1                   13e9e86bdcc31       kindnet-h28vp
	c593c83411d20       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       1                   a6d2a2147c23d       storage-provisioner
	d130ec085d5ce       8c811b4aec35f                                                                                         7 minutes ago       Running             busybox                   1                   4da814f488dc3       busybox-7b57f96db7-pstjp
	19c8584dae1b9       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   3                   fd69455479fe4       coredns-66bc5c9577-5wx4k
	21dff06737d90       df0860106674d                                                                                         7 minutes ago       Running             kube-proxy                1                   02337f9cf4b12       kube-proxy-5tkhn
	8a501078c4170       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   3                   ec5acf2654663       coredns-66bc5c9577-mjbz6
	9f5475377594b       90550c43ad2bc                                                                                         7 minutes ago       Running             kube-apiserver            1                   b47695e7722ae       kube-apiserver-ha-198834
	1625a23fd7f91       765655ea60781                                                                                         7 minutes ago       Exited              kube-vip                  0                   2edf887287bbf       kube-vip-ha-198834
	e5f91b76238c9       a0af72f2ec6d6                                                                                         7 minutes ago       Running             kube-controller-manager   1                   af005efeb3a09       kube-controller-manager-ha-198834
	371ff065d1dfd       46169d968e920                                                                                         7 minutes ago       Running             kube-scheduler            1                   7a13ea6d24610       kube-scheduler-ha-198834
	7b047b1099553       5f1f5298c888d                                                                                         7 minutes ago       Running             etcd                      1                   beb17aaed35c3       etcd-ha-198834
	43ce744921507       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Exited              busybox                   0                   2ab50e090d466       busybox-7b57f96db7-pstjp
	f4f7ea59034e3       52546a367cc9e                                                                                         16 minutes ago      Exited              coredns                   2                   7b09bb6b1db0b       coredns-66bc5c9577-mjbz6
	9a9eb43950f05       52546a367cc9e                                                                                         16 minutes ago      Exited              coredns                   2                   bfd4ac8a61c79       coredns-66bc5c9577-5wx4k
	470c5aeb0143c       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              16 minutes ago      Exited              kindnet-cni               0                   f541f878be896       kindnet-h28vp
	2da683f529549       df0860106674d                                                                                         17 minutes ago      Exited              kube-proxy                0                   b04f554fbbf03       kube-proxy-5tkhn
	4f536df8f44eb       a0af72f2ec6d6                                                                                         17 minutes ago      Exited              kube-controller-manager   0                   3f97e150fa11b       kube-controller-manager-ha-198834
	ea129c2b5408a       90550c43ad2bc                                                                                         17 minutes ago      Exited              kube-apiserver            0                   364803df34eb0       kube-apiserver-ha-198834
	69601afa8d5b0       5f1f5298c888d                                                                                         17 minutes ago      Exited              etcd                      0                   d6bbb58cc14ca       etcd-ha-198834
	82a99d0c7744a       46169d968e920                                                                                         17 minutes ago      Exited              kube-scheduler            0                   7ffde546949d7       kube-scheduler-ha-198834
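The table above is the CRI view of containers on the primary node. A rough, hedged way to regenerate a similar listing from the node's nested Docker daemon (again assuming the docker driver and the node container name ha-198834):

    docker exec ha-198834 docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}'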
	
	
	==> coredns [19c8584dae1b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53538 - 29295 "HINFO IN 9023489977302481875.6206531949632663336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037239604s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
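The reflector errors show this CoreDNS replica timing out against the in-cluster apiserver VIP 10.96.0.1:443 during the restart window. Two hedged follow-up checks, assuming kubectl is pointed at the ha-198834 cluster:

    # Which apiserver endpoints currently back the kubernetes Service VIP?
    kubectl get endpoints kubernetes -n default

    # Do the CoreDNS pods end up Ready despite the startup timeouts?
    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide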
	
	
	==> coredns [8a501078c417] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35492 - 21170 "HINFO IN 5429275037699935078.1019057475364754304. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034969536s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [9a9eb43950f0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48462 - 46874 "HINFO IN 5273252588524494281.7165436024008789767. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039842483s
	[INFO] 10.244.1.2:57104 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.003506199s
	[INFO] 10.244.1.2:35085 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.022178595s
	[INFO] 10.244.0.4:51301 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001619426s
	[INFO] 10.244.1.2:53849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259853s
	[INFO] 10.244.1.2:45188 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162256s
	[INFO] 10.244.1.2:47534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012152721s
	[INFO] 10.244.1.2:52406 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144842s
	[INFO] 10.244.0.4:34463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.015354953s
	[INFO] 10.244.0.4:44729 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000186473s
	[INFO] 10.244.0.4:49846 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119774s
	[INFO] 10.244.0.4:48015 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170848s
	[INFO] 10.244.1.2:54294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022712s
	[INFO] 10.244.1.2:41177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176943s
	[INFO] 10.244.1.2:35431 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141918s
	[INFO] 10.244.0.4:42357 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000222475s
	[INFO] 10.244.0.4:38639 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076388s
	[INFO] 10.244.1.2:48245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137114s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
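The query lines above (A/AAAA/PTR answers with NOERROR/NXDOMAIN) come from the pods at 10.244.0.4 and 10.244.1.2 resolving through the cluster DNS Service at 10.96.0.10. A hedged way to replay one such lookup from a throwaway pod (the pod name and busybox image tag are assumptions, not from the log):

    kubectl run dnsprobe --image=busybox:1.36 --rm -i --restart=Never -- \
      nslookup kubernetes.default.svc.cluster.local 10.96.0.10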
	
	
	==> coredns [f4f7ea59034e] <==
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55965 - 29875 "HINFO IN 6625775143588404920.46653605595863863. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.021009667s
	[INFO] 10.244.1.2:48391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347519s
	[INFO] 10.244.1.2:52968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.006268731s
	[INFO] 10.244.1.2:37064 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.01625905s
	[INFO] 10.244.0.4:38724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015051s
	[INFO] 10.244.0.4:54867 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000812822s
	[INFO] 10.244.0.4:36556 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000823234s
	[INFO] 10.244.0.4:49673 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000092222s
	[INFO] 10.244.1.2:54588 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010881123s
	[INFO] 10.244.1.2:37311 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184317s
	[INFO] 10.244.1.2:34776 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214951s
	[INFO] 10.244.1.2:60592 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000142928s
	[INFO] 10.244.0.4:49014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014497s
	[INFO] 10.244.0.4:49266 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.004264282s
	[INFO] 10.244.0.4:42048 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000128777s
	[INFO] 10.244.0.4:37542 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071541s
	[INFO] 10.244.1.2:43417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165726s
	[INFO] 10.244.0.4:54211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155228s
	[INFO] 10.244.0.4:54131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161968s
	[INFO] 10.244.1.2:34766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190106s
	[INFO] 10.244.1.2:35363 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179724s
	[INFO] 10.244.1.2:39508 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110509s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:14:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:11:44 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f3c2828aef94f11bd80d984a3eb304b
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m43s                  kube-proxy       
	  Normal  Starting                 17m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    17m                    kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                    kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  Starting                 17m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                    kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           17m                    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           8m56s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  NodeHasSufficientMemory  7m56s (x8 over 7m56s)  kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    7m56s (x8 over 7m56s)  kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m56s (x7 over 7m56s)  kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m47s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           7m9s                   node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           6m56s                  node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
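This block is kubectl's node description; the "Allocated resources" percentages relate pod requests/limits to the node's allocatable figures (for example, 950m CPU requested against 8 CPUs of allocatable capacity). A hedged way to re-query the same data, assuming kubectl is configured for this cluster:

    # Reproduce the description and the allocatable values the percentages are computed from:
    kubectl describe node ha-198834
    kubectl get node ha-198834 -o jsonpath='{.status.allocatable.cpu}{"\n"}{.status.allocatable.memory}{"\n"}'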
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:14:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:08:25 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:08:25 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:08:25 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:08:25 +0000   Wed, 17 Sep 2025 00:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d9336414c044e558d42395caacb8496
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 16m                    kube-proxy       
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m56s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  Starting                 7m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m54s (x8 over 7m54s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m54s (x8 over 7m54s)  kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m54s (x7 over 7m54s)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m47s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           7m9s                   node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           6m56s                  node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
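The "martian source" lines are the kernel flagging packets whose source address is implausible for the interface they arrived on (pod-CIDR sources seen on eth0 here); they are only emitted when martian logging is enabled. A hedged check of the relevant host sysctls (values are host-dependent):

    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter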
	
	
	==> etcd [69601afa8d5b] <==
	{"level":"warn","ts":"2025-09-17T00:06:21.816182Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:06:21.816245Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:06:21.816263Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:06:21.816182Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:06:21.816283Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:06:21.816292Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:06:21.816232Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-17T00:06:21.816324Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816342Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816364Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816409Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816435Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816472Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816689Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:06:21.816711Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.816726Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.816752Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.816950Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.817063Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.817099Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.817120Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:06:21.819127Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-17T00:06:21.819183Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:06:21.819210Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-17T00:06:21.819240Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-198834","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [7b047b109955] <==
	{"level":"info","ts":"2025-09-17T00:07:58.906971Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:58.923403Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:07:58.926618Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.247606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:14:20.255084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:14:20.263476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:14:20.273255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39384","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:14:20.282657Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(5981864578030751937 12593026477526642892)"}
	{"level":"info","ts":"2025-09-17T00:14:20.284010Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"b3d041dbb5a11c89","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-17T00:14:20.284049Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284086Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284124Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284164Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284185Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284211Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284417Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"context canceled"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284501Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b3d041dbb5a11c89","error":"failed to read b3d041dbb5a11c89 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-17T00:14:20.284527Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.284665Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:14:20.284696Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284724Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284740Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"b3d041dbb5a11c89"}
	{"level":"info","ts":"2025-09-17T00:14:20.284770Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.294982Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"b3d041dbb5a11c89"}
	{"level":"warn","ts":"2025-09-17T00:14:20.296687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:38978","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:14:29 up  2:56,  0 users,  load average: 1.01, 1.49, 1.55
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [470c5aeb0143] <==
	I0917 00:05:30.418641       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:40.418896       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:40.419001       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:05:40.419203       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:40.419213       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:40.419325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:40.419337       1 main.go:301] handling current node
	I0917 00:05:50.419127       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:05:50.419157       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:05:50.419382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:50.419397       1 main.go:301] handling current node
	I0917 00:05:50.419409       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:05:50.419413       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:06:00.422562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:00.422596       1 main.go:301] handling current node
	I0917 00:06:00.422611       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:06:00.422616       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:06:00.422807       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:06:00.422815       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:06:10.425320       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:10.425358       1 main.go:301] handling current node
	I0917 00:06:10.425375       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:06:10.425381       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:06:10.425598       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:06:10.425613       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [bdc52003487f] <==
	I0917 00:13:41.571105       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:13:41.571718       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:41.571924       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:13:51.563112       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:13:51.563147       1 main.go:301] handling current node
	I0917 00:13:51.563166       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:13:51.563171       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:13:51.563440       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:51.563450       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:01.562311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:01.562353       1 main.go:301] handling current node
	I0917 00:14:01.562369       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:01.562373       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:14:01.562589       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:14:01.562603       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:11.571668       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:11.571702       1 main.go:301] handling current node
	I0917 00:14:11.571718       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:11.571723       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:14:11.571936       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:14:11.571959       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:21.562648       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:21.562681       1 main.go:301] handling current node
	I0917 00:14:21.562695       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:21.562699       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9f5475377594] <==
	E0917 00:07:13.174810       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174819       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174828       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174837       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.174846       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.175006       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.175025       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-09-17T00:07:13.177063Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0007dd680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-09-17T00:07:13.177068Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00115f680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	E0917 00:07:13.177483       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:07:13.177936       1 watcher.go:335] watch chan error: etcdserver: no leader
	I0917 00:07:14.364054       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	W0917 00:07:43.229272       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0917 00:07:46.104655       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:08:03.309841       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:14.885894       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:09:27.376078       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:26.628008       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:10:30.857365       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:11:39.501415       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:11:56.232261       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:12:57.532285       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:02.515292       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:58.658174       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:20.016953       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [ea129c2b5408] <==
	I0917 00:06:21.818285       1 secure_serving.go:259] Stopped listening on [::]:8443
	I0917 00:06:21.818307       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:06:21.818343       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:06:21.818212       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 00:06:21.818343       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:06:21.818354       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0917 00:06:21.818404       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	E0917 00:06:21.818445       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0917 00:06:21.819652       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.966573ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-hxygcsz4tng6hmluvaoa4vlmha" result=null
	W0917 00:06:22.805712       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:06:22.862569       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-09-17T00:06:22.868061Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00125a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	I0917 00:06:22.868163       1 stats.go:136] "Error getting keys" err="context canceled"
	{"level":"warn","ts":"2025-09-17T00:06:22.868276Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00125a960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.869370Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0017f8960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	I0917 00:06:22.869404       1 cidrallocator.go:210] stopping ServiceCIDR Allocator Controller
	{"level":"warn","ts":"2025-09-17T00:06:22.869490Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00125b680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.869797Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0018fc5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.870313Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ce1860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	I0917 00:06:22.870382       1 stats.go:136] "Error getting keys" err="context canceled"
	{"level":"warn","ts":"2025-09-17T00:06:22.870475Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001ce1860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.871365Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ed2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
	I0917 00:06:22.871420       1 stats.go:136] "Error getting keys" err="context canceled"
	{"level":"warn","ts":"2025-09-17T00:06:22.871506Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0021ed2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-09-17T00:06:22.875069Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002a014a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	
	
	==> kube-controller-manager [4f536df8f44e] <==
	I0916 23:57:23.340737       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0916 23:57:23.340876       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:57:23.341125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0916 23:57:23.341625       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0916 23:57:23.341694       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0916 23:57:23.342559       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0916 23:57:23.344828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:57:23.344975       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:57:23.345054       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:57:23.345095       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:57:23.345107       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:57:23.345114       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:57:23.346125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:23.351186       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834" podCIDRs=["10.244.0.0/24"]
	I0916 23:57:23.356557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:57:23.360087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:57:53.917484       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m02\" does not exist"
	I0916 23:57:53.927329       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m02" podCIDRs=["10.244.1.0/24"]
	I0916 23:57:58.295579       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	E0916 23:58:24.690329       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0916 23:58:24.703047       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-89jfn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-89jfn\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 23:58:25.387067       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198834-m03\" does not exist"
	I0916 23:58:25.397154       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198834-m03" podCIDRs=["10.244.2.0/24"]
	I0916 23:58:28.308323       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	E0917 00:01:35.727697       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [e5f91b76238c] <==
	I0917 00:06:42.688133       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:06:42.688192       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:06:42.688272       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:06:42.688535       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:06:42.688667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:06:42.689165       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	I0917 00:06:42.689227       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834"
	I0917 00:06:42.689234       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:06:42.689307       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	I0917 00:06:42.689381       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:06:42.689800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:06:42.690667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:06:42.694964       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:06:42.699163       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:06:42.700692       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:06:42.713986       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:06:42.717269       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:06:42.722438       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:06:42.724798       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:06:42.752877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:14:22.716991       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717039       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717048       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717085       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717092       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	
	
	==> kube-proxy [21dff06737d9] <==
	I0917 00:06:40.905839       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:06:40.968196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0917 00:06:44.060317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-198834&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0917 00:06:45.568444       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:06:45.568482       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:06:45.568583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:06:45.590735       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:06:45.590782       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:06:45.596121       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:06:45.596463       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:06:45.596508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:06:45.597774       1 config.go:200] "Starting service config controller"
	I0917 00:06:45.597791       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:06:45.597883       1 config.go:309] "Starting node config controller"
	I0917 00:06:45.597987       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:06:45.598035       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:06:45.598042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:06:45.598039       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:06:45.598057       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:06:45.698355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:06:45.698442       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:06:45.698447       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:06:45.698470       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [2da683f52954] <==
	I0916 23:57:24.932824       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:57:25.001436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:57:25.102414       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:57:25.102449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:57:25.102563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:57:25.131540       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:57:25.131604       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:57:25.138482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:57:25.139006       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:57:25.139079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:57:25.143232       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:57:25.143254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:57:25.143282       1 config.go:200] "Starting service config controller"
	I0916 23:57:25.143288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:57:25.143298       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:57:25.143304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:57:25.144514       1 config.go:309] "Starting node config controller"
	I0916 23:57:25.144540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:57:25.144548       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:57:25.243772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:57:25.243822       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [371ff065d1df] <==
	I0917 00:06:34.304210       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:06:39.358570       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 00:06:39.358610       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 00:06:39.358624       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:06:39.358634       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:06:39.390353       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:06:39.390375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:06:39.392538       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:39.392576       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:39.392924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:06:39.392961       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:06:39.493239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [82a99d0c7744] <==
	E0916 23:58:30.048751       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lpn5v\": pod kindnet-lpn5v is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-lpn5v"
	I0916 23:58:30.051563       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lpn5v" node="ha-198834-m03"
	E0916 23:58:32.033373       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:32.033442       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b547a423-84fb-45ae-be85-ebd5ae31cede(kube-system/kindnet-wklkh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	E0916 23:58:32.033468       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wklkh\": pod kindnet-wklkh is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-wklkh"
	I0916 23:58:32.034562       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wklkh" node="ha-198834-m03"
	E0916 23:58:34.059741       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	E0916 23:58:34.059840       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ceef8152-3e11-4bf0-99dc-43470c027544(kube-system/kindnet-cdptd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.059869       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cdptd\": pod kindnet-cdptd is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-cdptd"
	E0916 23:58:34.060293       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	E0916 23:58:34.060658       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod c8813f5f-dfaf-4be1-a0ba-e444bcb2e943(kube-system/kindnet-8t8pb) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	E0916 23:58:34.061375       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8t8pb\": pod kindnet-8t8pb is already assigned to node \"ha-198834-m03\"" logger="UnhandledError" pod="kube-system/kindnet-8t8pb"
	I0916 23:58:34.061557       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cdptd" node="ha-198834-m03"
	I0916 23:58:34.062640       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8t8pb" node="ha-198834-m03"
	I0917 00:01:35.538693       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="ecad6988-1efb-4cc6-8920-902b41d3f3ed" pod="default/busybox-7b57f96db7-kg4q6" assumedNode="ha-198834-m02" currentNode="ha-198834-m03"
	E0917 00:01:35.544474       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m03"
	E0917 00:01:35.546366       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ecad6988-1efb-4cc6-8920-902b41d3f3ed(default/busybox-7b57f96db7-kg4q6) was assumed on ha-198834-m03 but assigned to ha-198834-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	E0917 00:01:35.546583       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-kg4q6\": pod busybox-7b57f96db7-kg4q6 is already assigned to node \"ha-198834-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-kg4q6"
	I0917 00:01:35.548055       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-kg4q6" node="ha-198834-m02"
	I0917 00:06:14.797858       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:06:14.797982       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:14.797862       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:06:14.798018       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:06:14.798047       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:06:14.798073       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 00:12:23 ha-198834 kubelet[1349]: E0917 00:12:23.472193    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:33 ha-198834 kubelet[1349]: E0917 00:12:33.476036    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:33 ha-198834 kubelet[1349]: E0917 00:12:33.476130    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:43 ha-198834 kubelet[1349]: E0917 00:12:43.482413    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:43 ha-198834 kubelet[1349]: E0917 00:12:43.482518    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:12:53 ha-198834 kubelet[1349]: E0917 00:12:53.487015    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:12:53 ha-198834 kubelet[1349]: E0917 00:12:53.487127    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315467 maxSize=10485760
	Sep 17 00:13:03 ha-198834 kubelet[1349]: E0917 00:13:03.492319    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:03 ha-198834 kubelet[1349]: E0917 00:13:03.492420    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:13 ha-198834 kubelet[1349]: E0917 00:13:13.496175    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:13 ha-198834 kubelet[1349]: E0917 00:13:13.496282    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:23 ha-198834 kubelet[1349]: E0917 00:13:23.501136    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:23 ha-198834 kubelet[1349]: E0917 00:13:23.501231    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:33 ha-198834 kubelet[1349]: E0917 00:13:33.507713    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:33 ha-198834 kubelet[1349]: E0917 00:13:33.507829    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:43 ha-198834 kubelet[1349]: E0917 00:13:43.509754    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:43 ha-198834 kubelet[1349]: E0917 00:13:43.509855    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:13:53 ha-198834 kubelet[1349]: E0917 00:13:53.513005    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:13:53 ha-198834 kubelet[1349]: E0917 00:13:53.513112    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315796 maxSize=10485760
	Sep 17 00:14:03 ha-198834 kubelet[1349]: E0917 00:14:03.518517    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:14:03 ha-198834 kubelet[1349]: E0917 00:14:03.518636    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315961 maxSize=10485760
	Sep 17 00:14:13 ha-198834 kubelet[1349]: E0917 00:14:13.521966    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:14:13 ha-198834 kubelet[1349]: E0917 00:14:13.522077    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42315961 maxSize=10485760
	Sep 17 00:14:23 ha-198834 kubelet[1349]: E0917 00:14:23.527352    1349 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7"
	Sep 17 00:14:23 ha-198834 kubelet[1349]: E0917 00:14:23.527458    1349 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log\": failed to reopen container log \"9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="9f5475377594b895906a2a156d1a4d37778cce4d679a4ff2de4bac2a0920c1e7" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/1.log" currentSize=42316126 maxSize=10485760
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-xfzdd
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-198834 describe pod busybox-7b57f96db7-xfzdd
helpers_test.go:290: (dbg) kubectl --context ha-198834 describe pod busybox-7b57f96db7-xfzdd:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-xfzdd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z55j5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-z55j5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  13s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s (x2 over 13s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.69s)

TestMultiControlPlane/serial/RestartCluster (728.51s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0917 00:15:36.250104  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:16:13.667997  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:17:36.734078  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:20:36.250096  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:21:13.667282  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:36.250123  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:26:13.666819  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: signal: killed (12m6.062854758s)

-- stdout --
	* [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	* Pulling base image v0.0.48 ...
	* Enabled addons: 
	
	* Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-198834-m04" worker node in "ha-198834" cluster
	* Pulling base image v0.0.48 ...

-- /stdout --
** stderr ** 
	I0917 00:14:51.984510  791016 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:14:51.984623  791016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:14:51.984632  791016 out.go:374] Setting ErrFile to fd 2...
	I0917 00:14:51.984636  791016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:14:51.984851  791016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:14:51.985333  791016 out.go:368] Setting JSON to false
	I0917 00:14:51.986252  791016 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10624,"bootTime":1758057468,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:14:51.986348  791016 start.go:140] virtualization: kvm guest
	I0917 00:14:51.988580  791016 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:14:51.990067  791016 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:14:51.990084  791016 notify.go:220] Checking for updates...
	I0917 00:14:51.993043  791016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:14:51.997525  791016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:14:51.998766  791016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:14:52.000046  791016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:14:52.001295  791016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:14:52.003279  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:14:52.004009  791016 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:14:52.027043  791016 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:14:52.027118  791016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:14:52.079584  791016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:14:52.069593968 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:14:52.079701  791016 docker.go:318] overlay module found
	I0917 00:14:52.081639  791016 out.go:179] * Using the docker driver based on existing profile
	I0917 00:14:52.082816  791016 start.go:304] selected driver: docker
	I0917 00:14:52.082830  791016 start.go:918] validating driver "docker" against &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fa
lse kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:14:52.083014  791016 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:14:52.083096  791016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:14:52.133926  791016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:14:52.125055044 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:14:52.134728  791016 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:14:52.134759  791016 cni.go:84] Creating CNI manager for ""
	I0917 00:14:52.134818  791016 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:14:52.134880  791016 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvid
ia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:14:52.136778  791016 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0917 00:14:52.138068  791016 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:14:52.139374  791016 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:14:52.140532  791016 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:14:52.140567  791016 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:14:52.140577  791016 cache.go:58] Caching tarball of preloaded images
	I0917 00:14:52.140634  791016 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:14:52.140682  791016 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:14:52.140695  791016 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:14:52.140810  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:14:52.161634  791016 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:14:52.161656  791016 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:14:52.161670  791016 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:14:52.161699  791016 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:14:52.161757  791016 start.go:364] duration metric: took 40.027µs to acquireMachinesLock for "ha-198834"
	I0917 00:14:52.161775  791016 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:14:52.161780  791016 fix.go:54] fixHost starting: 
	I0917 00:14:52.162001  791016 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:14:52.178501  791016 fix.go:112] recreateIfNeeded on ha-198834: state=Stopped err=<nil>
	W0917 00:14:52.178530  791016 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:14:52.180503  791016 out.go:252] * Restarting existing docker container for "ha-198834" ...
	I0917 00:14:52.180565  791016 cli_runner.go:164] Run: docker start ha-198834
	I0917 00:14:52.416033  791016 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:14:52.434074  791016 kic.go:430] container "ha-198834" state is running.
	I0917 00:14:52.434534  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:14:52.452524  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:14:52.452733  791016 machine.go:93] provisionDockerMachine start ...
	I0917 00:14:52.452794  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:52.469982  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:52.470316  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:52.470337  791016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:14:52.470946  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43040->127.0.0.1:32838: read: connection reset by peer
	I0917 00:14:55.608979  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:14:55.609013  791016 ubuntu.go:182] provisioning hostname "ha-198834"
	I0917 00:14:55.609067  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:55.625994  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:55.626247  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:55.626266  791016 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0917 00:14:55.773773  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:14:55.773838  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:55.791030  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:55.791293  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:55.791317  791016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:14:55.926499  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:14:55.926537  791016 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:14:55.926565  791016 ubuntu.go:190] setting up certificates
	I0917 00:14:55.926576  791016 provision.go:84] configureAuth start
	I0917 00:14:55.926624  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:14:55.943807  791016 provision.go:143] copyHostCerts
	I0917 00:14:55.943858  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:14:55.943889  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:14:55.943917  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:14:55.944003  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:14:55.944132  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:14:55.944164  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:14:55.944172  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:14:55.944218  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:14:55.944358  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:14:55.944389  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:14:55.944398  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:14:55.944447  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:14:55.944523  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0917 00:14:55.998211  791016 provision.go:177] copyRemoteCerts
	I0917 00:14:55.998274  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:14:55.998311  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.015389  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:56.112506  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:14:56.112586  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:14:56.137125  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:14:56.137200  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:14:56.161381  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:14:56.161451  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:14:56.186672  791016 provision.go:87] duration metric: took 260.078934ms to configureAuth
	I0917 00:14:56.186707  791016 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:14:56.186953  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:14:56.187011  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.204384  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:56.204678  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:56.204693  791016 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:14:56.340595  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:14:56.340617  791016 ubuntu.go:71] root file system type: overlay
	I0917 00:14:56.340751  791016 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:14:56.340818  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.357831  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:56.358082  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:56.358152  791016 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:14:56.507578  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:14:56.507700  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.524869  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:56.525110  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:56.525130  791016 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:14:56.666364  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:14:56.666388  791016 machine.go:96] duration metric: took 4.213639628s to provisionDockerMachine
	I0917 00:14:56.666402  791016 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0917 00:14:56.666415  791016 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:14:56.666485  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:14:56.666538  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.684227  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:56.780818  791016 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:14:56.784289  791016 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:14:56.784338  791016 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:14:56.784346  791016 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:14:56.784353  791016 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:14:56.784368  791016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:14:56.784415  791016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:14:56.784504  791016 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:14:56.784521  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:14:56.784604  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:14:56.793777  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:14:56.818622  791016 start.go:296] duration metric: took 152.204271ms for postStartSetup
	I0917 00:14:56.818715  791016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:14:56.818756  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.835972  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:56.927988  791016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:14:56.932365  791016 fix.go:56] duration metric: took 4.770576567s for fixHost
	I0917 00:14:56.932397  791016 start.go:83] releasing machines lock for "ha-198834", held for 4.770626502s
	I0917 00:14:56.932466  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:14:56.949411  791016 ssh_runner.go:195] Run: cat /version.json
	I0917 00:14:56.949449  791016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:14:56.949462  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.949546  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.967027  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:56.968031  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:57.127150  791016 ssh_runner.go:195] Run: systemctl --version
	I0917 00:14:57.132073  791016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:14:57.136597  791016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:14:57.155743  791016 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:14:57.155819  791016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:14:57.165195  791016 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:14:57.165233  791016 start.go:495] detecting cgroup driver to use...
	I0917 00:14:57.165266  791016 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:14:57.165384  791016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:14:57.182476  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:14:57.192824  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:14:57.203668  791016 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:14:57.203735  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:14:57.214504  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:14:57.225550  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:14:57.235825  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:14:57.246476  791016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:14:57.256782  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:14:57.267321  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:14:57.277778  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:14:57.288377  791016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:14:57.297185  791016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:14:57.305932  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:57.376667  791016 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:14:57.454312  791016 start.go:495] detecting cgroup driver to use...
	I0917 00:14:57.454356  791016 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:14:57.454399  791016 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:14:57.467289  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:14:57.478703  791016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:14:57.494132  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:14:57.505313  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:14:57.517213  791016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:14:57.534191  791016 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:14:57.537619  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:14:57.546337  791016 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:14:57.564871  791016 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:14:57.632307  791016 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:14:57.699840  791016 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:14:57.699996  791016 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:14:57.718319  791016 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:14:57.729368  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:57.799687  791016 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:14:58.627975  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:14:58.639659  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:14:58.651672  791016 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:14:58.663882  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:14:58.675231  791016 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:14:58.744466  791016 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:14:58.809772  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:58.874459  791016 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:14:58.898119  791016 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:14:58.909139  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:58.974481  791016 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:14:59.053663  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:14:59.065692  791016 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:14:59.065760  791016 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:14:59.069878  791016 start.go:563] Will wait 60s for crictl version
	I0917 00:14:59.069957  791016 ssh_runner.go:195] Run: which crictl
	I0917 00:14:59.073583  791016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:14:59.107316  791016 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:14:59.107388  791016 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:14:59.132627  791016 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:14:59.159893  791016 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:14:59.159983  791016 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:14:59.175796  791016 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:14:59.179723  791016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:14:59.192006  791016 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevir
t:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:14:59.192142  791016 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:14:59.192197  791016 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:14:59.213503  791016 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:14:59.213523  791016 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:14:59.213573  791016 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:14:59.235481  791016 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:14:59.235506  791016 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:14:59.235519  791016 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0917 00:14:59.235645  791016 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:14:59.235709  791016 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:14:59.287490  791016 cni.go:84] Creating CNI manager for ""
	I0917 00:14:59.287511  791016 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:14:59.287530  791016 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:14:59.287550  791016 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:14:59.287669  791016 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 00:14:59.287686  791016 kube-vip.go:115] generating kube-vip config ...
	I0917 00:14:59.287725  791016 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:14:59.300724  791016 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
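Annotation: the `lsmod | grep ip_vs` probe above decides whether kube-vip can use IPVS-based control-plane load balancing; since the module is not loaded, the run falls back to plain ARP failover for the VIP, as logged. An equivalent check can read /proc/modules directly instead of shelling out; the sketch below is illustrative only.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasKernelModule reports whether a module name appears in /proc/modules,
// which is the same information `lsmod` prints.
func hasKernelModule(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), name+" ") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := hasKernelModule("ip_vs")
	if err != nil {
		panic(err)
	}
	if ok {
		fmt.Println("ip_vs available: IPVS-based control-plane load balancing possible")
	} else {
		fmt.Println("ip_vs missing: fall back to ARP-only VIP failover")
	}
}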
	I0917 00:14:59.300820  791016 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
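Annotation: the manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod advertising 192.168.49.254 on port 8443. A rough way to confirm the VIP is answering is to probe /version; the sketch below assumes anonymous access to /version is still allowed (the default public-info-viewer binding) and that the VIP is routable from wherever the probe runs, and is not part of the test itself.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The API server uses minikube's self-signed CA, so skip verification for this probe.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// VIP and port taken from the kube-vip config above.
	resp, err := client.Get("https://192.168.49.254:8443/version")
	if err != nil {
		fmt.Println("VIP not answering:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("VIP answered with HTTP %d: %s\n", resp.StatusCode, body)
}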
	I0917 00:14:59.300869  791016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:14:59.310131  791016 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:14:59.310206  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:14:59.319198  791016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0917 00:14:59.338210  791016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:14:59.356160  791016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0917 00:14:59.374135  791016 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:14:59.391922  791016 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:14:59.395394  791016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:14:59.406380  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:59.474578  791016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:14:59.496142  791016 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0917 00:14:59.496164  791016 certs.go:194] generating shared ca certs ...
	I0917 00:14:59.496187  791016 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:14:59.496351  791016 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:14:59.496407  791016 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:14:59.496421  791016 certs.go:256] generating profile certs ...
	I0917 00:14:59.496539  791016 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:14:59.496580  791016 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.9ff12082
	I0917 00:14:59.496599  791016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.9ff12082 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:14:59.782546  791016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.9ff12082 ...
	I0917 00:14:59.782585  791016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.9ff12082: {Name:mkd77e113eef8cc978e41c42a33e5d17dfff4d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:14:59.782773  791016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.9ff12082 ...
	I0917 00:14:59.782792  791016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.9ff12082: {Name:mk3187b32897fcdd1c8f3a813b2dbb432ec29d5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:14:59.782949  791016 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.9ff12082 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0917 00:14:59.783161  791016 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.9ff12082 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0917 00:14:59.783350  791016 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:14:59.783370  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:14:59.783385  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:14:59.783400  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:14:59.783419  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:14:59.783473  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:14:59.783497  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:14:59.783515  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:14:59.783532  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:14:59.783595  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:14:59.783637  791016 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:14:59.783650  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:14:59.783685  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:14:59.783712  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:14:59.783746  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:14:59.783801  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:14:59.783841  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:14:59.783863  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:14:59.783880  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:14:59.784428  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:14:59.812725  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:14:59.839361  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:14:59.863852  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:14:59.888859  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:14:59.912896  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:14:59.937052  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:14:59.961552  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:14:59.985795  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:15:00.010010  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:15:00.039777  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:15:00.073277  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:15:00.098053  791016 ssh_runner.go:195] Run: openssl version
	I0917 00:15:00.106012  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:15:00.121822  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:00.127559  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:00.127633  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:00.136603  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:15:00.147672  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:15:00.163310  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:15:00.170581  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:15:00.170659  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:15:00.181863  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:15:00.195819  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:15:00.211427  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:15:00.217353  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:15:00.217422  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:15:00.227363  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:15:00.241884  791016 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:15:00.247297  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:15:00.255471  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:15:00.262948  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:15:00.271239  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:15:00.279755  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:15:00.288625  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
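Annotation: the block of `openssl x509 -noout -in ... -checkend 86400` runs above asks whether each control-plane certificate will still be valid in 24 hours (86400 seconds); a failing check would trigger regeneration. The same test can be done without shelling out by parsing the PEM and comparing NotAfter, as in the illustrative sketch below (the path is one of those from the log).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given duration, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}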
	I0917 00:15:00.298707  791016 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:f
alse logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:15:00.298959  791016 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:15:00.331583  791016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:15:00.344226  791016 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:15:00.344251  791016 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:15:00.344297  791016 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:15:00.356225  791016 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:15:00.356954  791016 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-198834" does not appear in /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:15:00.357167  791016 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-661878/kubeconfig needs updating (will repair): [kubeconfig missing "ha-198834" cluster setting kubeconfig missing "ha-198834" context setting]
	I0917 00:15:00.357537  791016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:15:00.358238  791016 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:15:00.358794  791016 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:15:00.358816  791016 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:15:00.358822  791016 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:15:00.358827  791016 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:15:00.358832  791016 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:15:00.358831  791016 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:15:00.359348  791016 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:15:00.374216  791016 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:15:00.374315  791016 kubeadm.go:593] duration metric: took 30.054163ms to restartPrimaryControlPlane
	I0917 00:15:00.374348  791016 kubeadm.go:394] duration metric: took 75.647083ms to StartCluster
	I0917 00:15:00.374405  791016 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:15:00.374563  791016 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:15:00.375597  791016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:15:00.376534  791016 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:15:00.376617  791016 start.go:241] waiting for startup goroutines ...
	I0917 00:15:00.376603  791016 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:15:00.376897  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:00.379385  791016 out.go:179] * Enabled addons: 
	I0917 00:15:00.381029  791016 addons.go:514] duration metric: took 4.421986ms for enable addons: enabled=[]
	I0917 00:15:00.381072  791016 start.go:246] waiting for cluster config update ...
	I0917 00:15:00.381084  791016 start.go:255] writing updated cluster config ...
	I0917 00:15:00.382978  791016 out.go:203] 
	I0917 00:15:00.384821  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:00.384920  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:00.386510  791016 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0917 00:15:00.388511  791016 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:15:00.389947  791016 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:15:00.391240  791016 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:15:00.391267  791016 cache.go:58] Caching tarball of preloaded images
	I0917 00:15:00.391331  791016 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:15:00.391366  791016 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:15:00.391376  791016 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:15:00.391505  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:00.417091  791016 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:15:00.417116  791016 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:15:00.417137  791016 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:15:00.417168  791016 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:15:00.417240  791016 start.go:364] duration metric: took 48.032µs to acquireMachinesLock for "ha-198834-m02"
	I0917 00:15:00.417265  791016 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:15:00.417271  791016 fix.go:54] fixHost starting: m02
	I0917 00:15:00.417572  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:15:00.441100  791016 fix.go:112] recreateIfNeeded on ha-198834-m02: state=Stopped err=<nil>
	W0917 00:15:00.441136  791016 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:15:00.443247  791016 out.go:252] * Restarting existing docker container for "ha-198834-m02" ...
	I0917 00:15:00.443344  791016 cli_runner.go:164] Run: docker start ha-198834-m02
	I0917 00:15:00.796646  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:15:00.817426  791016 kic.go:430] container "ha-198834-m02" state is running.
	I0917 00:15:00.817876  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:15:00.836189  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:00.836456  791016 machine.go:93] provisionDockerMachine start ...
	I0917 00:15:00.836532  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:00.857658  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:00.857866  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:00.857878  791016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:15:00.858657  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54150->127.0.0.1:32843: read: connection reset by peer
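Annotation: the handshake failure above is expected right after `docker start`, while sshd inside the freshly restarted container is still coming up; the run simply retries until the command succeeds a few seconds later, as the next line shows. A hedged sketch of that kind of dial-until-ready loop (the address is the mapped SSH port from the log; this is not minikube's actual retry code):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps trying a TCP endpoint until it accepts a connection
// or the deadline passes.
func dialWithRetry(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("gave up dialing %s: %w", addr, err)
		}
		time.Sleep(time.Second) // the container's sshd may still be starting
	}
}

func main() {
	if err := dialWithRetry("127.0.0.1:32843", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("SSH port is accepting connections")
}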
	I0917 00:15:04.012250  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:15:04.012285  791016 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0917 00:15:04.012348  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:04.035386  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:04.035689  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:04.035711  791016 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0917 00:15:04.199294  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:15:04.199371  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:04.217053  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:04.217272  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:04.217296  791016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:15:04.358579  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:15:04.358614  791016 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:15:04.358640  791016 ubuntu.go:190] setting up certificates
	I0917 00:15:04.358657  791016 provision.go:84] configureAuth start
	I0917 00:15:04.358716  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:15:04.385667  791016 provision.go:143] copyHostCerts
	I0917 00:15:04.385715  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:15:04.385758  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:15:04.385770  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:15:04.385862  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:15:04.385993  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:15:04.386033  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:15:04.386044  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:15:04.386087  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:15:04.386159  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:15:04.386195  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:15:04.386205  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:15:04.386246  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:15:04.386320  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0917 00:15:04.984661  791016 provision.go:177] copyRemoteCerts
	I0917 00:15:04.984745  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:15:04.984793  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.012125  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:05.136809  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:15:05.136881  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:15:05.176029  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:15:05.176109  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:15:05.218675  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:15:05.218751  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:15:05.261970  791016 provision.go:87] duration metric: took 903.291872ms to configureAuth
	I0917 00:15:05.262072  791016 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:15:05.262363  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:05.262473  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.286994  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:05.287695  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:05.287713  791016 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:15:05.457610  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:15:05.457635  791016 ubuntu.go:71] root file system type: overlay
	I0917 00:15:05.457785  791016 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:15:05.457847  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.485000  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:05.485376  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:05.485488  791016 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:15:05.653320  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:15:05.653408  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.675057  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:05.675286  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:05.675303  791016 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:15:05.827491  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:15:05.827539  791016 machine.go:96] duration metric: took 4.991067416s to provisionDockerMachine
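Annotation: the long SSH command above (`sudo diff -u ... || { mv ...; systemctl daemon-reload && enable && restart docker; }`) is the idempotent part of provisioning: the freshly rendered docker.service is only swapped in, and Docker only restarted, when it actually differs from the unit already on disk. A hedged sketch of the same pattern done in Go (paths and unit name as in the log; the helper is illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceUnitIfChanged installs newPath over unitPath and restarts the unit,
// but only when the rendered file differs from the one already installed.
func replaceUnitIfChanged(unitPath, newPath, unit string) error {
	oldData, _ := os.ReadFile(unitPath) // a missing file simply counts as "changed"
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return nil // nothing changed, leave the running daemon untouched
	}
	if err := os.Rename(newPath, unitPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := replaceUnitIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	fmt.Println("update result:", err)
}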
	I0917 00:15:05.827554  791016 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0917 00:15:05.827568  791016 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:15:05.827632  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:15:05.827683  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.852368  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:05.969426  791016 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:15:05.974657  791016 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:15:05.974700  791016 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:15:05.974711  791016 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:15:05.974720  791016 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:15:05.974735  791016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:15:05.974814  791016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:15:05.974922  791016 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:15:05.974933  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:15:05.975061  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:15:05.987682  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:15:06.014751  791016 start.go:296] duration metric: took 187.179003ms for postStartSetup
	I0917 00:15:06.014841  791016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:15:06.014888  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:06.032229  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:06.127487  791016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:15:06.132280  791016 fix.go:56] duration metric: took 5.715000153s for fixHost
	I0917 00:15:06.132307  791016 start.go:83] releasing machines lock for "ha-198834-m02", held for 5.715051279s
	I0917 00:15:06.132377  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:15:06.151141  791016 out.go:179] * Found network options:
	I0917 00:15:06.152404  791016 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:15:06.153500  791016 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:15:06.153560  791016 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:15:06.153646  791016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:15:06.153693  791016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:15:06.153703  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:06.153775  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:06.172919  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:06.173221  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:06.336059  791016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:15:06.356573  791016 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:15:06.356644  791016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:15:06.367823  791016 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:15:06.367850  791016 start.go:495] detecting cgroup driver to use...
	I0917 00:15:06.367879  791016 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:15:06.368011  791016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:15:06.384865  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:15:06.395196  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:15:06.405823  791016 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:15:06.405897  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:15:06.416855  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:15:06.427229  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:15:06.437724  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:15:06.448158  791016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:15:06.457924  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:15:06.469234  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:15:06.479584  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:15:06.490186  791016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:15:06.499478  791016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:15:06.508117  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:06.649341  791016 ssh_runner.go:195] Run: sudo systemctl restart containerd
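Annotation: the series of `sed -i` edits above rewrites /etc/containerd/config.toml so that containerd matches the systemd cgroup driver detected on the host (SystemdCgroup = true, runc v2 runtime, pause:3.10.1 as the sandbox image) before containerd is restarted. A hedged sketch of the key substitution done with a regexp instead of sed, operating on the file contents as a string (the config fragment is an illustrative example):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A fragment of a containerd config.toml as it might look before the edit.
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false`

	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAllString(config, "${1}SystemdCgroup = true")

	fmt.Println(updated)
}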
	I0917 00:15:06.856941  791016 start.go:495] detecting cgroup driver to use...
	I0917 00:15:06.857001  791016 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:15:06.857054  791016 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:15:06.871023  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:15:06.883130  791016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:15:06.904414  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:15:06.917467  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:15:06.929880  791016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:15:06.948179  791016 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:15:06.951952  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:15:06.962102  791016 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:15:06.986553  791016 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:15:07.111853  791016 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:15:07.255138  791016 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:15:07.255189  791016 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:15:07.279181  791016 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:15:07.291464  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:07.423129  791016 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:15:33.539124  791016 ssh_runner.go:235] Completed: sudo systemctl restart docker: (26.115947477s)
	I0917 00:15:33.539215  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:15:33.556589  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:15:33.573719  791016 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:15:33.598265  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:15:33.613037  791016 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:15:33.716455  791016 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:15:33.842930  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:33.991506  791016 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:15:34.022454  791016 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:15:34.043666  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:34.183399  791016 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:15:34.334172  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:15:34.353300  791016 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:15:34.353475  791016 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:15:34.359509  791016 start.go:563] Will wait 60s for crictl version
	I0917 00:15:34.359581  791016 ssh_runner.go:195] Run: which crictl
	I0917 00:15:34.364611  791016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:15:34.433146  791016 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
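"Will wait 60s for socket path /var/run/cri-dockerd.sock" is a simple existence poll before crictl is queried. A minimal sketch of that kind of wait in Go (the 500ms interval is an assumption):

```go
// Sketch: poll for a unix socket path to appear, giving up after a timeout.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}
```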
	I0917 00:15:34.433245  791016 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:15:34.476733  791016 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:15:34.524154  791016 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:15:34.526057  791016 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:15:34.527467  791016 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:15:34.552695  791016 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:15:34.559052  791016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
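The one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends the gateway mapping. The same filter-and-append pattern, sketched in Go (the shell version's temp-file-plus-sudo-cp dance is simplified to a direct rewrite; assumes root):

```go
// Sketch: replace the host.minikube.internal line in /etc/hosts with a
// fresh "<gateway> host.minikube.internal" entry.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
```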
	I0917 00:15:34.580385  791016 mustload.go:65] Loading cluster: ha-198834
	I0917 00:15:34.580685  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:34.581036  791016 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:15:34.616278  791016 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:15:34.616686  791016 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0917 00:15:34.616764  791016 certs.go:194] generating shared ca certs ...
	I0917 00:15:34.616799  791016 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:15:34.617054  791016 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:15:34.617163  791016 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:15:34.617177  791016 certs.go:256] generating profile certs ...
	I0917 00:15:34.617315  791016 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:15:34.617397  791016 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4
	I0917 00:15:34.617455  791016 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:15:34.617468  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:15:34.617485  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:15:34.617500  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:15:34.617515  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:15:34.617528  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:15:34.617543  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:15:34.617557  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:15:34.617570  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:15:34.617643  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:15:34.617688  791016 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:15:34.617699  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:15:34.617730  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:15:34.617774  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:15:34.617801  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:15:34.617876  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:15:34.617935  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:15:34.617957  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:15:34.617984  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:34.618057  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:15:34.644031  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:15:34.759250  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:15:34.766794  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:15:34.802705  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:15:34.814466  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:15:34.841530  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:15:34.848280  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:15:34.882344  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:15:34.894700  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:15:34.916869  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:15:34.923475  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:15:34.959856  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:15:34.971121  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:15:35.007770  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:15:35.090564  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:15:35.159352  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:15:35.215888  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:15:35.283778  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:15:35.346216  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:15:35.406460  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:15:35.478425  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:15:35.547961  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:15:35.625362  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:15:35.679806  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:15:35.729740  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:15:35.769544  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:15:35.819313  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:15:35.859183  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:15:35.904607  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:15:35.943073  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:15:36.017602  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:15:36.071338  791016 ssh_runner.go:195] Run: openssl version
	I0917 00:15:36.087808  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:15:36.119634  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:15:36.125999  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:15:36.126077  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:15:36.139804  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:15:36.169274  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:15:36.194768  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:15:36.203260  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:15:36.203337  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:15:36.220717  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:15:36.239759  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:15:36.273783  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:36.284643  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:36.284717  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:36.303099  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
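Each `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pair above installs a certificate under the subject-hash filename OpenSSL's lookup code expects (e.g. b5213941.0 for the minikube CA). A hedged sketch that shells out to openssl for the hash and creates the link (assumes openssl is on PATH and the process runs as root):

```go
// Sketch: compute a certificate's OpenSSL subject hash and symlink it
// into /etc/ssl/certs/<hash>.0, as the log does for each CA file.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("CA hash link installed")
}
```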
	I0917 00:15:36.318748  791016 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:15:36.325645  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:15:36.339417  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:15:36.349966  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:15:36.364947  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:15:36.376365  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:15:36.392566  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
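Each `openssl x509 -noout -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. The same check can be expressed in Go with crypto/x509; a minimal sketch against one of the files checked above:

```go
// Sketch: report whether a PEM certificate expires within the next 24h,
// matching openssl's "-checkend 86400" test.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
```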
	I0917 00:15:36.405182  791016 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0917 00:15:36.405340  791016 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:15:36.405377  791016 kube-vip.go:115] generating kube-vip config ...
	I0917 00:15:36.405560  791016 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:15:36.427797  791016 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
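The `lsmod | grep ip_vs` probe above exits 1 with empty output, so kube-vip's IPVS-based control-plane load-balancing is skipped and only the ARP-managed VIP is configured. lsmod is just a formatted view of /proc/modules; a sketch of the same probe in pure Go:

```go
// Sketch: check whether the ip_vs kernel module is loaded by scanning
// /proc/modules, the same data lsmod formats.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		name := strings.Fields(scanner.Text())[0]
		if name == "ip_vs" || strings.HasPrefix(name, "ip_vs_") {
			fmt.Println("ip_vs modules are available")
			return
		}
	}
	fmt.Println("ip_vs not loaded; skipping IPVS load-balancing")
}
```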
	I0917 00:15:36.427973  791016 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:15:36.428079  791016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:15:36.445942  791016 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:15:36.446081  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:15:36.463378  791016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:15:36.495762  791016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:15:36.552099  791016 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:15:36.585385  791016 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:15:36.593638  791016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:15:36.613847  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:36.850964  791016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:15:36.870862  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:36.870544  791016 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:15:36.875616  791016 out.go:179] * Verifying Kubernetes components...
	I0917 00:15:36.876865  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:37.094569  791016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:15:37.164528  791016 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:15:37.164638  791016 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:15:37.165011  791016 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0917 00:15:43.121636  791016 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0917 00:15:43.121673  791016 node_ready.go:38] duration metric: took 5.95663549s for node "ha-198834-m02" to be "Ready" ...
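"waiting up to 6m0s for node ... to be 'Ready'" polls the node object through the API server (note the stale VIP host being overridden to 192.168.49.2 just above). A client-go sketch of that readiness wait; the kubeconfig path, node name, and polling interval are illustrative assumptions, not minikube's exact code:

```go
// Sketch: poll a node's Ready condition via client-go until it is true
// or a timeout elapses, similar to the node_ready wait in the log.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-198834-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient API errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(`node "ha-198834-m02" is Ready`)
}
```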
	I0917 00:15:43.121697  791016 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:15:43.121757  791016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:15:43.622856  791016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:15:43.635792  791016 api_server.go:72] duration metric: took 6.764839609s to wait for apiserver process to appear ...
	I0917 00:15:43.635819  791016 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:15:43.635843  791016 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:15:43.641470  791016 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:15:43.642433  791016 api_server.go:141] control plane version: v1.34.0
	I0917 00:15:43.642459  791016 api_server.go:131] duration metric: took 6.632404ms to wait for apiserver health ...
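The health wait above hits https://192.168.49.2:8443/healthz and expects a 200 with body "ok" before checking the control-plane version. A sketch of the same probe with net/http, trusting the cluster CA (the CA path is the one transferred to the node earlier in the log; on another host, use the profile's ca.crt):

```go
// Sketch: call the apiserver /healthz endpoint over TLS using the
// cluster CA and report the result, like the api_server health wait.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("failed to parse cluster CA")
	}

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```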
	I0917 00:15:43.642471  791016 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:15:43.648490  791016 system_pods.go:59] 24 kube-system pods found
	I0917 00:15:43.648534  791016 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:15:43.648540  791016 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:15:43.648546  791016 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:15:43.648549  791016 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:15:43.648552  791016 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:15:43.648555  791016 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:15:43.648558  791016 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:15:43.648562  791016 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:15:43.648566  791016 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:15:43.648571  791016 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:15:43.648575  791016 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:15:43.648580  791016 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:15:43.648585  791016 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:15:43.648590  791016 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:15:43.648594  791016 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:15:43.648598  791016 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:15:43.648603  791016 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0917 00:15:43.648607  791016 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:15:43.648612  791016 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:15:43.648616  791016 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:15:43.648624  791016 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0917 00:15:43.648629  791016 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:15:43.648634  791016 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:15:43.648637  791016 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:15:43.648642  791016 system_pods.go:74] duration metric: took 6.165191ms to wait for pod list to return data ...
	I0917 00:15:43.648650  791016 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:15:43.651851  791016 default_sa.go:45] found service account: "default"
	I0917 00:15:43.651875  791016 default_sa.go:55] duration metric: took 3.218931ms for default service account to be created ...
	I0917 00:15:43.651887  791016 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:15:43.666854  791016 system_pods.go:86] 24 kube-system pods found
	I0917 00:15:43.666888  791016 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:15:43.666897  791016 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:15:43.666919  791016 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:15:43.666925  791016 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:15:43.666930  791016 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:15:43.666935  791016 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:15:43.666940  791016 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:15:43.666945  791016 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:15:43.666951  791016 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:15:43.666956  791016 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:15:43.666961  791016 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:15:43.666967  791016 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:15:43.666972  791016 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:15:43.666977  791016 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:15:43.666983  791016 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:15:43.666987  791016 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:15:43.666992  791016 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0917 00:15:43.666997  791016 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:15:43.667001  791016 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:15:43.667005  791016 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:15:43.667011  791016 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0917 00:15:43.667016  791016 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:15:43.667022  791016 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:15:43.667026  791016 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:15:43.667035  791016 system_pods.go:126] duration metric: took 15.140554ms to wait for k8s-apps to be running ...
	I0917 00:15:43.667044  791016 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:15:43.667101  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:15:43.683924  791016 system_svc.go:56] duration metric: took 16.85635ms WaitForService to wait for kubelet
	I0917 00:15:43.683963  791016 kubeadm.go:578] duration metric: took 6.813012204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:15:43.683983  791016 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:15:43.687655  791016 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:15:43.687692  791016 node_conditions.go:123] node cpu capacity is 8
	I0917 00:15:43.687704  791016 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:15:43.687707  791016 node_conditions.go:123] node cpu capacity is 8
	I0917 00:15:43.687711  791016 node_conditions.go:105] duration metric: took 3.724125ms to run NodePressure ...
	I0917 00:15:43.687726  791016 start.go:241] waiting for startup goroutines ...
	I0917 00:15:43.687757  791016 start.go:255] writing updated cluster config ...
	I0917 00:15:43.692409  791016 out.go:203] 
	I0917 00:15:43.693954  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:43.694046  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:43.695802  791016 out.go:179] * Starting "ha-198834-m04" worker node in "ha-198834" cluster
	I0917 00:15:43.697154  791016 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:15:43.698391  791016 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:15:43.699490  791016 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:15:43.699515  791016 cache.go:58] Caching tarball of preloaded images
	I0917 00:15:43.699526  791016 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:15:43.699611  791016 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:15:43.699626  791016 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:15:43.699718  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:43.722121  791016 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:15:43.722140  791016 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:15:43.722155  791016 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:15:43.722183  791016 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:15:43.722250  791016 start.go:364] duration metric: took 48.37µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:15:43.722270  791016 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:15:43.722287  791016 fix.go:54] fixHost starting: m04
	I0917 00:15:43.722532  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:15:43.742782  791016 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:15:43.742808  791016 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:15:43.744738  791016 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:15:43.744819  791016 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:15:44.000115  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:15:44.021055  791016 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:15:44.021486  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:15:44.042936  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:44.043250  791016 machine.go:93] provisionDockerMachine start ...
	I0917 00:15:44.043333  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:15:44.062132  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:44.062387  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0917 00:15:44.062402  791016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:15:44.063000  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56636->127.0.0.1:32848: read: connection reset by peer
	I0917 00:15:47.098738  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 57 further "Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain" entries, logged roughly every 3 seconds from 00:15:50 through 00:18:40, omitted ...]
	I0917 00:18:43.229282  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:46.230993  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
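The repeated "attempted methods [none publickey]" failures above mean the restarted m04 container accepts TCP connections on port 32848 but rejects public-key authentication, and the dialer keeps retrying about every three seconds. For reference, a minimal golang.org/x/crypto/ssh dial with public-key auth looks like the sketch below; the key path follows the pattern shown earlier for the primary node and, like the disabled host-key check, is an assumption for illustration only:

```go
// Sketch: dial a machine over SSH with public-key auth, the method the
// log shows failing against 127.0.0.1:32848.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(
		"/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32848", cfg)
	if err != nil {
		log.Fatalf("dial failed: %v", err) // e.g. "unable to authenticate"
	}
	defer client.Close()
	fmt.Println("ssh handshake succeeded")
}
```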
	I0917 00:18:46.231045  791016 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:18:46.231137  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:18:46.249452  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:18:46.249758  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0917 00:18:46.249779  791016 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m04 && echo "ha-198834-m04" | sudo tee /etc/hostname
	I0917 00:18:46.285458  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 58 further identical "unable to authenticate, attempted methods [none publickey]" dial errors, logged roughly every 3 seconds from 00:18:49 through 00:21:42, omitted ...]
	I0917 00:21:45.460302  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:48.461450  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:21:48.461548  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:21:48.480778  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:21:48.481107  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0917 00:21:48.481140  791016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
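The script above is the hostname fix-up the provisioner keeps trying to deliver over SSH; every attempt dies at the handshake because the node rejects the offered key ("attempted methods [none publickey]"). Below is a minimal standalone Go sketch of that dial-and-run step using golang.org/x/crypto/ssh, with the key path, port (32848), and "docker" user taken from the log lines above as assumptions; it is an illustration, not minikube's actual code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, port and user mirror the values logged for ha-198834-m04 (assumptions).
	keyBytes, err := os.ReadFile(".minikube/machines/ha-198834-m04/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only sketch; do not use in production
	}

	// If the key is not accepted by the node, Dial fails with the same
	// "unable to authenticate, attempted methods [none publickey]" error seen in this log.
	client, err := ssh.Dial("tcp", "127.0.0.1:32848", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The same hostname fix-up shown in the log above.
	script := `if ! grep -xq '.*\sha-198834-m04' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m04/g' /etc/hosts;
  else
    echo '127.0.1.1 ha-198834-m04' | sudo tee -a /etc/hosts;
  fi
fi`
	out, err := sess.CombinedOutput(script)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

With a key that the node's authorized_keys does not contain, ssh.Dial returns exactly the handshake error that repeats throughout this section, which is why the provisioner never gets as far as running the script.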
	I0917 00:21:48.516298  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:51.552071  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:54.590469  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:57.628670  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:00.664680  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:03.701437  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:06.739386  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:09.777268  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:12.813739  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:15.850785  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:18.887530  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:21.924310  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:24.959723  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:27.995646  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:31.031835  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:34.069898  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:37.106615  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:40.143875  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:43.182368  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:46.219887  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:49.255824  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:52.291827  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:55.329526  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:58.365554  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:01.402035  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:04.439392  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:07.475470  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:10.512193  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:13.547889  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:16.587547  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:19.625956  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:22.661476  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:25.699040  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:28.736086  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:31.772368  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:34.810879  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:37.846288  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:40.881735  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:43.918521  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:46.956842  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:49.994277  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:53.030898  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:56.068948  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:59.105416  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:02.141467  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:05.178642  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:08.214731  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:11.250406  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:14.288478  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:17.324824  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:20.360203  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:23.397681  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:26.434358  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:29.471836  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:32.509568  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:35.551457  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:38.589282  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:41.626303  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:44.664281  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:47.700228  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:50.700391  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:24:50.700433  791016 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:24:50.700458  791016 ubuntu.go:190] setting up certificates
	I0917 00:24:50.700479  791016 provision.go:84] configureAuth start
	I0917 00:24:50.700558  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
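configureAuth first resolves the node container's address with a docker inspect Go template; that address (192.168.49.5 here) is what later appears in the server certificate's SAN list. A hedged sketch of the same lookup via os/exec, reusing the template string and container name from the log line above (both assumptions copied from the log, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log shows cli_runner using for ha-198834-m04.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}",
		"ha-198834-m04").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "192.168.49.5," when no IPv6 address is assigned
}

The same pattern with the template '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' is how the log recovers the host-mapped SSH port (32848) for the node.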
	I0917 00:24:50.720896  791016 provision.go:143] copyHostCerts
	I0917 00:24:50.720966  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:50.721019  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:24:50.721032  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:50.721163  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:24:50.721268  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:50.721289  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:24:50.721297  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:50.721328  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:24:50.721378  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:50.721395  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:24:50.721401  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:50.721423  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:24:50.721478  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
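The server certificate is regenerated on every configureAuth pass, signed by the profile CA and carrying the SAN list printed above. A minimal crypto/x509 sketch of that step, assuming the CA key is an RSA PKCS#1 key (file names are shortened stand-ins for the .minikube paths in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the profile CA (paths shortened; the log uses the full .minikube/certs tree).
	caCertPEM, err := os.ReadFile("certs/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the node's server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Org and SANs copied from the log line above for ha-198834-m04.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834-m04"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-198834-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // contents of server.pem
}

The copyRemoteCerts step that follows would push this material to /etc/docker on the node, but in this run it never succeeds because every SSH session fails the publickey handshake.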
	I0917 00:24:51.285190  791016 provision.go:177] copyRemoteCerts
	I0917 00:24:51.285253  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:24:51.285291  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:51.303249  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:51.338957  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:51.339061  791016 retry.go:31] will retry after 281.845457ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:51.657087  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:51.657127  791016 retry.go:31] will retry after 421.456805ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:52.114872  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:52.114918  791016 retry.go:31] will retry after 612.457307ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:52.764688  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:52.764777  791016 retry.go:31] will retry after 197.965238ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:52.963134  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:52.980867  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:53.017119  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:53.017161  791016 retry.go:31] will retry after 209.678413ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:53.263937  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:53.263979  791016 retry.go:31] will retry after 526.783878ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:53.827994  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:53.828035  791016 retry.go:31] will retry after 448.495953ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:54.313198  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.313318  791016 provision.go:87] duration metric: took 3.61282574s to configureAuth
	W0917 00:24:54.313335  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.313352  791016 retry.go:31] will retry after 103.827µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.314491  791016 provision.go:84] configureAuth start
	I0917 00:24:54.314577  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:24:54.333347  791016 provision.go:143] copyHostCerts
	I0917 00:24:54.333388  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:54.333417  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:24:54.333426  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:54.333484  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:24:54.333574  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:54.333593  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:24:54.333600  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:54.333622  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:24:54.333681  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:54.333697  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:24:54.333704  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:54.333722  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:24:54.333786  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:24:54.370138  791016 provision.go:177] copyRemoteCerts
	I0917 00:24:54.370200  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:24:54.370238  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:54.388376  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:54.425543  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.425585  791016 retry.go:31] will retry after 355.710441ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:54.818272  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.818309  791016 retry.go:31] will retry after 403.920682ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:55.258709  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:55.258740  791016 retry.go:31] will retry after 317.009231ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:55.612188  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:55.612218  791016 retry.go:31] will retry after 534.541777ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:56.182325  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.182427  791016 provision.go:87] duration metric: took 1.867913877s to configureAuth
	W0917 00:24:56.182442  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.182458  791016 retry.go:31] will retry after 201.235µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.183625  791016 provision.go:84] configureAuth start
	I0917 00:24:56.183706  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:24:56.200713  791016 provision.go:143] copyHostCerts
	I0917 00:24:56.200751  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:56.200784  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:24:56.200793  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:56.200853  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:24:56.200980  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:56.201007  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:24:56.201015  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:56.201041  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:24:56.201111  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:56.201127  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:24:56.201154  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:56.201176  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:24:56.201242  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:24:56.655526  791016 provision.go:177] copyRemoteCerts
	I0917 00:24:56.655593  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:24:56.655639  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:56.674005  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:56.709386  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.709416  791016 retry.go:31] will retry after 171.720164ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:56.917745  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.917783  791016 retry.go:31] will retry after 188.501002ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:57.143392  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:57.143424  791016 retry.go:31] will retry after 332.534047ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:57.512832  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:57.512864  791016 retry.go:31] will retry after 695.94873ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:58.244519  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:58.244600  791016 retry.go:31] will retry after 170.395165ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:58.416061  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:58.433852  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:58.469453  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:58.469485  791016 retry.go:31] will retry after 314.919798ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:58.820406  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:58.820439  791016 retry.go:31] will retry after 336.705475ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:59.193385  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:59.193414  791016 retry.go:31] will retry after 746.821803ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:59.978043  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:59.978147  791016 provision.go:87] duration metric: took 3.794501473s to configureAuth
	W0917 00:24:59.978178  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:59.978194  791016 retry.go:31] will retry after 180.554µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:59.979340  791016 provision.go:84] configureAuth start
	I0917 00:24:59.979415  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:24:59.996428  791016 provision.go:143] copyHostCerts
	I0917 00:24:59.996473  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:59.996506  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:24:59.996516  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:59.996573  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:24:59.996662  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:59.996691  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:24:59.996697  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:59.996722  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:24:59.996780  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:59.996797  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:24:59.996800  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:59.996818  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:24:59.996881  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:00.210182  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:00.210246  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:00.210288  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:00.228967  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:00.264547  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:00.264579  791016 retry.go:31] will retry after 249.421954ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:00.550784  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:00.550815  791016 retry.go:31] will retry after 446.501241ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:01.033319  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.033351  791016 retry.go:31] will retry after 322.057737ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:01.391567  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.391664  791016 retry.go:31] will retry after 258.753859ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.651131  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:01.668829  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:01.705294  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.705328  791016 retry.go:31] will retry after 147.654759ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:01.889504  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.889556  791016 retry.go:31] will retry after 321.004527ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:02.247226  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:02.247256  791016 retry.go:31] will retry after 286.119197ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:02.568952  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:02.568987  791016 retry.go:31] will retry after 792.931835ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:03.398850  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:03.398968  791016 provision.go:87] duration metric: took 3.419601273s to configureAuth
	W0917 00:25:03.398980  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:03.398992  791016 retry.go:31] will retry after 275.728µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:03.400129  791016 provision.go:84] configureAuth start
	I0917 00:25:03.400208  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:03.417859  791016 provision.go:143] copyHostCerts
	I0917 00:25:03.417895  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:03.417951  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:03.417970  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:03.418027  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:03.418116  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:03.418139  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:03.418145  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:03.418169  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:03.418230  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:03.418248  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:03.418252  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:03.418270  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:03.418334  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:03.710212  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:03.710280  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:03.710316  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:03.729281  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:03.765486  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:03.765523  791016 retry.go:31] will retry after 234.355448ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:04.037830  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:04.037865  791016 retry.go:31] will retry after 202.71283ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:04.277687  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:04.277721  791016 retry.go:31] will retry after 699.043005ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:05.012602  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:05.012635  791016 retry.go:31] will retry after 683.45052ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:05.732161  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:05.732267  791016 provision.go:87] duration metric: took 2.332116129s to configureAuth
	W0917 00:25:05.732281  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:05.732313  791016 retry.go:31] will retry after 408.117µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:05.733412  791016 provision.go:84] configureAuth start
	I0917 00:25:05.733507  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:05.751337  791016 provision.go:143] copyHostCerts
	I0917 00:25:05.751373  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:05.751404  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:05.751425  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:05.751483  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:05.751611  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:05.751634  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:05.751639  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:05.751673  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:05.751745  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:05.751763  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:05.751767  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:05.751788  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:05.751854  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:06.013451  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:06.013524  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:06.013572  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:06.031320  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:06.068487  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:06.068521  791016 retry.go:31] will retry after 199.995997ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:06.304427  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:06.304462  791016 retry.go:31] will retry after 428.334269ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:06.768652  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:06.768689  791016 retry.go:31] will retry after 282.250622ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:07.088533  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.088624  791016 retry.go:31] will retry after 130.195743ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.219926  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:07.237696  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:07.273350  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.273381  791016 retry.go:31] will retry after 332.263248ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:07.641362  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.641394  791016 retry.go:31] will retry after 219.825801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:07.897344  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.897384  791016 retry.go:31] will retry after 289.760844ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:08.223698  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:08.223734  791016 retry.go:31] will retry after 931.250784ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:09.191398  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.191617  791016 provision.go:87] duration metric: took 3.458158315s to configureAuth
	W0917 00:25:09.191645  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.191665  791016 retry.go:31] will retry after 486.462µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.192804  791016 provision.go:84] configureAuth start
	I0917 00:25:09.192898  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:09.210285  791016 provision.go:143] copyHostCerts
	I0917 00:25:09.210330  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:09.210368  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:09.210381  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:09.210454  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:09.210575  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:09.210607  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:09.210615  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:09.210655  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:09.210738  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:09.210761  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:09.210767  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:09.210798  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:09.210888  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:09.663367  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:09.663424  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:09.663472  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:09.683494  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:09.719268  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.719298  791016 retry.go:31] will retry after 199.262805ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:09.953459  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.953491  791016 retry.go:31] will retry after 204.479137ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:10.194710  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:10.194766  791016 retry.go:31] will retry after 758.559532ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:10.989359  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:10.989439  791016 retry.go:31] will retry after 370.221733ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:11.360025  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:11.377052  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:11.412480  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:11.412512  791016 retry.go:31] will retry after 329.383966ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:11.777745  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:11.777791  791016 retry.go:31] will retry after 269.690913ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:12.083866  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:12.083920  791016 retry.go:31] will retry after 572.239384ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:12.694586  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:12.694623  791016 retry.go:31] will retry after 464.05197ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:13.195486  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:13.195578  791016 provision.go:87] duration metric: took 4.002742375s to configureAuth
	W0917 00:25:13.195591  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:13.195605  791016 retry.go:31] will retry after 780.686µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:13.196735  791016 provision.go:84] configureAuth start
	I0917 00:25:13.196827  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:13.214509  791016 provision.go:143] copyHostCerts
	I0917 00:25:13.214547  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:13.214583  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:13.214596  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:13.214653  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:13.214774  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:13.214804  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:13.214814  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:13.214855  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:13.214977  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:13.215005  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:13.215015  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:13.215051  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:13.215146  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:13.649513  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:13.649599  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:13.649651  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:13.667345  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:13.703380  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:13.703405  791016 retry.go:31] will retry after 313.64163ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:14.053145  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:14.053179  791016 retry.go:31] will retry after 317.387612ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:14.406606  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:14.406635  791016 retry.go:31] will retry after 566.64859ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:15.009997  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.010090  791016 retry.go:31] will retry after 196.134619ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.206496  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:15.225650  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:15.261454  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.261483  791016 retry.go:31] will retry after 245.022682ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:15.541833  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.541868  791016 retry.go:31] will retry after 322.443288ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:15.900997  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.901029  791016 retry.go:31] will retry after 516.015576ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:16.453598  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:16.453713  791016 provision.go:87] duration metric: took 3.256958214s to configureAuth
	W0917 00:25:16.453726  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:16.453747  791016 retry.go:31] will retry after 2.333678ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:16.456970  791016 provision.go:84] configureAuth start
	I0917 00:25:16.457049  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:16.474811  791016 provision.go:143] copyHostCerts
	I0917 00:25:16.474850  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:16.474886  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:16.474898  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:16.474982  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:16.475066  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:16.475089  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:16.475094  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:16.475116  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:16.475200  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:16.475229  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:16.475235  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:16.475255  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:16.475307  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:16.799509  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:16.799573  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:16.799610  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:16.817674  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:16.853071  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:16.853103  791016 retry.go:31] will retry after 168.122328ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:17.056441  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:17.056479  791016 retry.go:31] will retry after 382.833105ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:17.475972  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:17.476010  791016 retry.go:31] will retry after 655.886733ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:18.168049  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:18.168130  791016 retry.go:31] will retry after 198.307554ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:18.367629  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:18.385594  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:18.421176  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:18.421209  791016 retry.go:31] will retry after 338.713182ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:18.796178  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:18.796221  791016 retry.go:31] will retry after 259.124236ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:19.090799  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.090830  791016 retry.go:31] will retry after 349.555843ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:19.476895  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.477033  791016 provision.go:87] duration metric: took 3.020038692s to configureAuth
	W0917 00:25:19.477045  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.477057  791016 retry.go:31] will retry after 2.091895ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.479197  791016 provision.go:84] configureAuth start
	I0917 00:25:19.479286  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:19.496629  791016 provision.go:143] copyHostCerts
	I0917 00:25:19.496665  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:19.496694  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:19.496703  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:19.496759  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:19.496860  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:19.496891  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:19.496902  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:19.496961  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:19.497023  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:19.497040  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:19.497047  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:19.497067  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:19.497118  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:19.683726  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:19.683783  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:19.683827  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:19.700779  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:19.736622  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.736652  791016 retry.go:31] will retry after 290.006963ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:20.062197  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:20.062228  791016 retry.go:31] will retry after 316.758379ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:20.414876  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:20.414939  791016 retry.go:31] will retry after 431.588331ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:20.882854  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:20.882887  791016 retry.go:31] will retry after 517.588716ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:21.436944  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.437046  791016 provision.go:87] duration metric: took 1.957814295s to configureAuth
	W0917 00:25:21.437060  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.437075  791016 retry.go:31] will retry after 4.850853ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.442274  791016 provision.go:84] configureAuth start
	I0917 00:25:21.442352  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:21.459812  791016 provision.go:143] copyHostCerts
	I0917 00:25:21.459852  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:21.459889  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:21.459915  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:21.459981  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:21.460083  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:21.460111  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:21.460118  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:21.460154  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:21.460230  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:21.460255  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:21.460265  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:21.460298  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:21.460379  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:21.629816  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:21.629893  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:21.629972  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:21.647193  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:21.682201  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.682241  791016 retry.go:31] will retry after 126.915617ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:21.845401  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.845437  791016 retry.go:31] will retry after 469.570747ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:22.351442  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:22.351471  791016 retry.go:31] will retry after 507.616138ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:22.895718  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:22.895753  791016 retry.go:31] will retry after 740.220603ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:23.672589  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:23.672691  791016 provision.go:87] duration metric: took 2.230395673s to configureAuth
	W0917 00:25:23.672706  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:23.672720  791016 retry.go:31] will retry after 6.27654ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:23.679963  791016 provision.go:84] configureAuth start
	I0917 00:25:23.680048  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:23.698019  791016 provision.go:143] copyHostCerts
	I0917 00:25:23.698055  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:23.698084  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:23.698094  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:23.698150  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:23.698231  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:23.698249  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:23.698256  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:23.698277  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:23.698337  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:23.698355  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:23.698361  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:23.698380  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:23.698429  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:24.029562  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:24.029627  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:24.029680  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:24.047756  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:24.083251  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:24.083280  791016 retry.go:31] will retry after 306.883934ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:24.426968  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:24.427002  791016 retry.go:31] will retry after 551.664172ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:25.015455  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:25.015500  791016 retry.go:31] will retry after 393.354081ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:25.443750  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:25.443843  791016 retry.go:31] will retry after 209.025309ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:25.653338  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:25.671012  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:25.706190  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:25.706224  791016 retry.go:31] will retry after 337.41418ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:26.080787  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:26.080820  791016 retry.go:31] will retry after 315.469689ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:26.432569  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:26.432605  791016 retry.go:31] will retry after 312.231441ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:26.780798  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:26.780835  791016 retry.go:31] will retry after 483.843039ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:27.300600  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.300689  791016 provision.go:87] duration metric: took 3.620701809s to configureAuth
	W0917 00:25:27.300702  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.300718  791016 retry.go:31] will retry after 11.695348ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.312945  791016 provision.go:84] configureAuth start
	I0917 00:25:27.313032  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:27.330556  791016 provision.go:143] copyHostCerts
	I0917 00:25:27.330602  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:27.330635  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:27.330647  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:27.330822  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:27.331014  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:27.331047  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:27.331058  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:27.331099  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:27.331173  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:27.331200  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:27.331210  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:27.331244  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:27.331317  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:27.629838  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:27.629916  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:27.629953  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:27.647177  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:27.683485  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.683515  791016 retry.go:31] will retry after 138.123235ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:27.856935  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.856982  791016 retry.go:31] will retry after 436.619432ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:28.329524  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:28.329555  791016 retry.go:31] will retry after 467.020117ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:28.833937  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:28.834026  791016 retry.go:31] will retry after 183.2183ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:29.017423  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:29.034786  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:29.069940  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:29.069976  791016 retry.go:31] will retry after 166.546001ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:29.272729  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:29.272758  791016 retry.go:31] will retry after 198.842029ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:29.507282  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:29.507312  791016 retry.go:31] will retry after 555.640977ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:30.100015  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:30.100125  791016 provision.go:87] duration metric: took 2.787150007s to configureAuth
	W0917 00:25:30.100139  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:30.100157  791016 retry.go:31] will retry after 11.223573ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:30.112404  791016 provision.go:84] configureAuth start
	I0917 00:25:30.112484  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:30.131065  791016 provision.go:143] copyHostCerts
	I0917 00:25:30.131109  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:30.131147  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:30.131156  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:30.131248  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:30.131358  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:30.131386  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:30.131393  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:30.131431  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:30.131524  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:30.131544  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:30.131551  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:30.131573  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:30.131628  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:30.575141  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:30.575202  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:30.575252  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:30.592938  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:30.628649  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:30.628681  791016 retry.go:31] will retry after 366.602106ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:31.031508  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:31.031550  791016 retry.go:31] will retry after 275.917946ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:31.347177  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:31.347212  791016 retry.go:31] will retry after 745.1072ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:32.128387  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:32.128472  791016 retry.go:31] will retry after 200.656021ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:32.329946  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:32.347751  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:32.383095  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:32.383126  791016 retry.go:31] will retry after 270.30765ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:32.689393  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:32.689427  791016 retry.go:31] will retry after 386.377583ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:33.111945  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:33.111985  791016 retry.go:31] will retry after 779.601898ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:33.927500  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:33.927602  791016 provision.go:87] duration metric: took 3.815171721s to configureAuth
	W0917 00:25:33.927617  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:33.927631  791016 retry.go:31] will retry after 25.310066ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:33.953841  791016 provision.go:84] configureAuth start
	I0917 00:25:33.953971  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:33.971697  791016 provision.go:143] copyHostCerts
	I0917 00:25:33.971740  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:33.971778  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:33.971790  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:33.971858  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:33.971998  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:33.972029  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:33.972037  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:33.972076  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:33.972149  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:33.972177  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:33.972185  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:33.972232  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:33.972310  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:34.221689  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:34.221771  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:34.221812  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:34.239922  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:34.276302  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:34.276335  791016 retry.go:31] will retry after 202.741431ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:34.515638  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:34.515671  791016 retry.go:31] will retry after 330.700518ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:34.886306  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:34.886340  791016 retry.go:31] will retry after 464.499956ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:35.387217  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:35.387246  791016 retry.go:31] will retry after 834.737314ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:36.257758  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:36.257850  791016 provision.go:87] duration metric: took 2.303985725s to configureAuth
	W0917 00:25:36.257863  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:36.257878  791016 retry.go:31] will retry after 14.936659ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
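	The retry.go entries above show each failed SSH dial being retried after a short, growing delay before configureAuth gives up for that attempt and is itself retried. As a rough illustration only (a sketch, not minikube's actual retry.go), a backoff-with-jitter retry loop in Go could look like this:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff is an illustrative helper: it re-runs fn until it succeeds
// or maxAttempts is reached, sleeping a randomized, growing delay in between,
// similar in spirit to the "will retry after ..." lines logged above.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay with each attempt and add jitter so concurrent
		// callers do not retry in lockstep.
		delay := time.Duration(attempt)*base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	err := retryWithBackoff(4, 200*time.Millisecond, func() error {
		return errors.New("ssh: handshake failed")
	})
	fmt.Println(err)
}
```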
	I0917 00:25:36.273125  791016 provision.go:84] configureAuth start
	I0917 00:25:36.273246  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:36.290248  791016 provision.go:143] copyHostCerts
	I0917 00:25:36.290292  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:36.290321  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:36.290332  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:36.290396  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:36.290473  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:36.290492  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:36.290496  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:36.290517  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:36.290572  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:36.290590  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:36.290596  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:36.290616  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:36.290666  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:36.498343  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:36.498408  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:36.498443  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:36.515898  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:36.552435  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:36.552465  791016 retry.go:31] will retry after 180.61757ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:36.769325  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:36.769358  791016 retry.go:31] will retry after 562.132822ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:37.368228  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:37.368264  791016 retry.go:31] will retry after 544.785898ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:37.949256  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:37.949359  791016 retry.go:31] will retry after 128.292209ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:38.078770  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:38.097675  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:38.133371  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:38.133402  791016 retry.go:31] will retry after 352.391784ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:38.521888  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:38.521955  791016 retry.go:31] will retry after 460.42605ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:39.018110  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:39.018148  791016 retry.go:31] will retry after 387.428687ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:39.441428  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:39.441509  791016 provision.go:87] duration metric: took 3.168355202s to configureAuth
	W0917 00:25:39.441518  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:39.441529  791016 retry.go:31] will retry after 29.479848ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
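	Each cycle re-resolves the host port that Docker mapped to the node's 22/tcp port using the `docker container inspect` template shown in the cli_runner lines. As a hedged sketch (the exec-based wrapper below is illustrative, not the code path minikube uses), the same lookup from Go:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker mapped to the container's 22/tcp
// port, using the same Go-template query that appears in the log above.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-198834-m04") // container name taken from the log
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("ssh port:", port) // the log shows 32848 for this node
}
```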
	I0917 00:25:39.471745  791016 provision.go:84] configureAuth start
	I0917 00:25:39.471861  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:39.489985  791016 provision.go:143] copyHostCerts
	I0917 00:25:39.490027  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:39.490063  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:39.490073  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:39.490138  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:39.490218  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:39.490235  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:39.490242  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:39.490263  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:39.490310  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:39.490326  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:39.490332  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:39.490353  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:39.490429  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:39.837444  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:39.837517  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:39.837561  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:39.855699  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:39.892374  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:39.892402  791016 retry.go:31] will retry after 339.257174ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:40.267805  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:40.267841  791016 retry.go:31] will retry after 430.368382ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:40.733710  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:40.733747  791016 retry.go:31] will retry after 574.039985ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:41.344413  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:41.344514  791016 retry.go:31] will retry after 220.875911ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:41.566059  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:41.583584  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:41.620538  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:41.620570  791016 retry.go:31] will retry after 285.005928ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:41.941411  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:41.941447  791016 retry.go:31] will retry after 277.918377ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:42.255712  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:42.255751  791016 retry.go:31] will retry after 471.129173ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:42.762606  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:42.762646  791016 retry.go:31] will retry after 740.22815ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:43.538086  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:43.538195  791016 provision.go:87] duration metric: took 4.06640215s to configureAuth
	W0917 00:25:43.538209  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:43.538230  791016 retry.go:31] will retry after 37.466882ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
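	Every dial in this section fails with "unable to authenticate, attempted methods [none publickey]", which is what the Go SSH client reports when the offered key is rejected by the server. A minimal sketch of that kind of dial, assuming golang.org/x/crypto/ssh and reusing the key path and port from the sshutil.go lines (illustrative only, not minikube's sshutil.go):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address taken from the sshutil.go lines above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa")
	if err != nil {
		fmt.Println("read key:", err)
		return
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		fmt.Println("parse key:", err)
		return
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32848", cfg)
	if err != nil {
		// If the server does not accept this public key, the error is the
		// "unable to authenticate, attempted methods [none publickey]"
		// message repeated throughout the log.
		fmt.Println("dial:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}
```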
	I0917 00:25:43.576470  791016 provision.go:84] configureAuth start
	I0917 00:25:43.576585  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:43.594402  791016 provision.go:143] copyHostCerts
	I0917 00:25:43.594468  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:43.594516  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:43.594529  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:43.594592  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:43.594691  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:43.594717  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:43.594725  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:43.594761  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:43.594830  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:43.594855  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:43.594864  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:43.594894  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:43.595010  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:43.877318  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:43.877382  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:43.877416  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:43.896784  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:43.932298  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:43.932330  791016 retry.go:31] will retry after 235.376507ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:44.204731  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:44.204762  791016 retry.go:31] will retry after 243.192801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:44.484438  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:44.484471  791016 retry.go:31] will retry after 467.521838ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:44.987761  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:44.987792  791016 retry.go:31] will retry after 440.455179ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:45.464476  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:45.464564  791016 retry.go:31] will retry after 235.586322ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:45.700291  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:45.717698  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:45.754106  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:45.754133  791016 retry.go:31] will retry after 195.54121ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:45.985831  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:45.985859  791016 retry.go:31] will retry after 418.816392ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:46.441057  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:46.441088  791016 retry.go:31] will retry after 374.559798ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:46.852817  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:46.852921  791016 provision.go:87] duration metric: took 3.276390875s to configureAuth
	W0917 00:25:46.852942  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:46.852959  791016 retry.go:31] will retry after 83.017266ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:46.936222  791016 provision.go:84] configureAuth start
	I0917 00:25:46.936327  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:46.953837  791016 provision.go:143] copyHostCerts
	I0917 00:25:46.953876  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:46.953916  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:46.953926  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:46.953994  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:46.954075  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:46.954100  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:46.954107  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:46.954129  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:46.954173  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:46.954192  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:46.954197  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:46.954217  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:46.954267  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:47.247232  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:47.247295  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:47.247330  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:47.264843  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:47.300678  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:47.300707  791016 retry.go:31] will retry after 180.912565ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:47.518282  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:47.518316  791016 retry.go:31] will retry after 370.390241ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:47.924210  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:47.924247  791016 retry.go:31] will retry after 540.421858ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:48.500616  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:48.500710  791016 retry.go:31] will retry after 231.87747ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:48.733102  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:48.751254  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:48.787314  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:48.787350  791016 retry.go:31] will retry after 259.477269ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:49.083609  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:49.083645  791016 retry.go:31] will retry after 362.863033ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:49.482344  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:49.482375  791016 retry.go:31] will retry after 826.172003ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:50.344419  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:50.344523  791016 provision.go:87] duration metric: took 3.408273466s to configureAuth
	W0917 00:25:50.344542  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:50.344557  791016 retry.go:31] will retry after 143.029284ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
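	Each provision.go:117 line regenerates a server certificate for the node with the SAN list [127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]. As a rough, self-contained sketch of issuing a certificate with those SANs via crypto/x509 (self-signed here for brevity, whereas the server.pem in the log is signed with ca.pem/ca-key.pem):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-198834-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list logged by provision.go.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
		DNSNames:    []string{"ha-198834-m04", "localhost", "minikube"},
	}
	// Self-signed for the sketch; the real server.pem is signed by the machine CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```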
	I0917 00:25:50.487719  791016 provision.go:84] configureAuth start
	I0917 00:25:50.487824  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:50.506244  791016 provision.go:143] copyHostCerts
	I0917 00:25:50.506288  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:50.506326  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:50.506339  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:50.506410  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:50.506525  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:50.506545  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:50.506549  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:50.506571  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:50.506615  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:50.506632  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:50.506638  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:50.506657  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:50.506706  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:51.057390  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:51.057452  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:51.057498  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:51.075369  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:51.110807  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:51.110841  791016 retry.go:31] will retry after 261.420902ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:51.408489  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:51.408521  791016 retry.go:31] will retry after 457.951037ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:51.902924  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:51.902960  791016 retry.go:31] will retry after 555.51636ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:52.494245  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:52.494277  791016 retry.go:31] will retry after 502.444655ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:53.032775  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.032876  791016 provision.go:87] duration metric: took 2.545127126s to configureAuth
	W0917 00:25:53.032887  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.032922  791016 retry.go:31] will retry after 132.723075ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.166223  791016 provision.go:84] configureAuth start
	I0917 00:25:53.166336  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:53.184729  791016 provision.go:143] copyHostCerts
	I0917 00:25:53.184765  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:53.184796  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:53.184806  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:53.184861  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:53.185004  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:53.185026  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:53.185031  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:53.185056  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:53.185106  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:53.185131  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:53.185137  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:53.185169  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:53.185241  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:53.405462  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:53.405529  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:53.405566  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:53.423724  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:53.460846  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.460884  791016 retry.go:31] will retry after 358.963449ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:53.855720  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.855751  791016 retry.go:31] will retry after 447.864842ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:54.340046  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:54.340083  791016 retry.go:31] will retry after 421.446936ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:54.797579  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:54.797675  791016 retry.go:31] will retry after 353.967853ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:55.152271  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:55.169297  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:55.204258  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:55.204297  791016 retry.go:31] will retry after 278.657731ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:55.519656  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:55.519692  791016 retry.go:31] will retry after 208.336638ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:55.764495  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:55.764531  791016 retry.go:31] will retry after 283.294437ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:56.084010  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:56.084045  791016 retry.go:31] will retry after 1.063783665s: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:57.184001  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:57.184096  791016 provision.go:87] duration metric: took 4.017831036s to configureAuth
	W0917 00:25:57.184108  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:57.184119  791016 retry.go:31] will retry after 378.928957ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:57.563691  791016 provision.go:84] configureAuth start
	I0917 00:25:57.563778  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:57.581822  791016 provision.go:143] copyHostCerts
	I0917 00:25:57.581868  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:57.581899  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:57.581923  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:57.581992  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:57.582073  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:57.582095  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:57.582102  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:57.582127  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:57.582173  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:57.582197  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:57.582203  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:57.582221  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:57.582309  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:58.132305  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:58.132367  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:58.132400  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:58.149902  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:58.185053  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:58.185090  791016 retry.go:31] will retry after 212.996056ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:58.433521  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:58.433549  791016 retry.go:31] will retry after 216.913128ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:58.686168  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:58.686197  791016 retry.go:31] will retry after 655.131011ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:59.377369  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:59.377450  791016 retry.go:31] will retry after 205.148257ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:59.582864  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:59.604554  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:59.640460  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:59.640495  791016 retry.go:31] will retry after 329.057785ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:00.007985  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:00.008024  791016 retry.go:31] will retry after 536.951443ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:00.581652  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:00.581690  791016 retry.go:31] will retry after 536.690401ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:01.155338  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:01.155433  791016 provision.go:87] duration metric: took 3.591713623s to configureAuth
	W0917 00:26:01.155445  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:01.155462  791016 retry.go:31] will retry after 622.316963ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:01.777972  791016 provision.go:84] configureAuth start
	I0917 00:26:01.778089  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:01.796128  791016 provision.go:143] copyHostCerts
	I0917 00:26:01.796164  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:01.796194  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:01.796201  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:01.796259  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:01.796336  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:01.796354  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:01.796361  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:01.796381  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:01.796425  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:01.796441  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:01.796447  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:01.796466  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:01.796523  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:02.270708  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:02.270783  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:02.270825  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:02.289557  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:02.325352  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:02.325380  791016 retry.go:31] will retry after 165.164388ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:02.526457  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:02.526494  791016 retry.go:31] will retry after 421.940684ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:02.985238  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:02.985271  791016 retry.go:31] will retry after 756.233115ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:03.777794  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:03.777898  791016 retry.go:31] will retry after 362.951024ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:04.141610  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:04.159786  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:04.196169  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:04.196203  791016 retry.go:31] will retry after 352.114514ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:04.584706  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:04.584741  791016 retry.go:31] will retry after 236.165759ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:04.856300  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:04.856330  791016 retry.go:31] will retry after 329.150146ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:05.220887  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:05.220941  791016 retry.go:31] will retry after 832.300856ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:06.089574  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:06.089684  791016 provision.go:87] duration metric: took 4.311683722s to configureAuth
	W0917 00:26:06.089698  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:06.089711  791016 retry.go:31] will retry after 747.062346ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:06.837550  791016 provision.go:84] configureAuth start
	I0917 00:26:06.837663  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:06.854985  791016 provision.go:143] copyHostCerts
	I0917 00:26:06.855025  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:06.855062  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:06.855077  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:06.855162  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:06.855261  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:06.855289  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:06.855313  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:06.855351  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:06.855415  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:06.855439  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:06.855447  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:06.855473  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:06.855543  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:07.186545  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:07.186614  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:07.186651  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:07.204967  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:07.240895  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:07.240993  791016 retry.go:31] will retry after 168.762413ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:07.446091  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:07.446121  791016 retry.go:31] will retry after 434.540683ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:07.917493  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:07.917531  791016 retry.go:31] will retry after 701.606273ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:08.655641  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:08.655724  791016 retry.go:31] will retry after 320.530213ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:08.977392  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:08.995492  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:09.031193  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:09.031221  791016 retry.go:31] will retry after 191.167982ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:09.258892  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:09.258953  791016 retry.go:31] will retry after 454.439774ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:09.749896  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:09.749949  791016 retry.go:31] will retry after 825.076652ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:10.611548  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:10.611625  791016 provision.go:87] duration metric: took 3.774028836s to configureAuth
	W0917 00:26:10.611634  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:10.611644  791016 retry.go:31] will retry after 1.309627243s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:11.922057  791016 provision.go:84] configureAuth start
	I0917 00:26:11.922182  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:11.939828  791016 provision.go:143] copyHostCerts
	I0917 00:26:11.939864  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:11.939891  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:11.939898  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:11.939986  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:11.940075  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:11.940094  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:11.940101  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:11.940123  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:11.940169  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:11.940191  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:11.940198  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:11.940217  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:11.940303  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:12.110010  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:12.110072  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:12.110108  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:12.128184  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:12.164303  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:12.164340  791016 retry.go:31] will retry after 339.722995ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:12.540417  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:12.540451  791016 retry.go:31] will retry after 335.702574ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:12.911688  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:12.911716  791016 retry.go:31] will retry after 605.279338ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:13.552353  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:13.552425  791016 retry.go:31] will retry after 229.36283ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:13.782969  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:13.803921  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:13.840242  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:13.840277  791016 retry.go:31] will retry after 206.955206ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:14.084438  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:14.084468  791016 retry.go:31] will retry after 289.625439ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:14.410419  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:14.410451  791016 retry.go:31] will retry after 792.244108ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:15.238421  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:15.238506  791016 provision.go:87] duration metric: took 3.316415805s to configureAuth
	W0917 00:26:15.238518  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:15.238536  791016 retry.go:31] will retry after 2.156331292s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:17.395523  791016 provision.go:84] configureAuth start
	I0917 00:26:17.395612  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:17.413608  791016 provision.go:143] copyHostCerts
	I0917 00:26:17.413651  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:17.413683  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:17.413693  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:17.413747  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:17.413841  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:17.413863  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:17.413869  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:17.413891  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:17.413973  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:17.413992  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:17.414000  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:17.414021  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:17.414073  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:17.562638  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:17.562714  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:17.562769  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:17.581673  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:17.619191  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:17.619225  791016 retry.go:31] will retry after 169.359395ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:17.824944  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:17.824986  791016 retry.go:31] will retry after 561.831267ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:18.424226  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:18.424260  791016 retry.go:31] will retry after 531.694204ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:18.992199  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:18.992233  791016 retry.go:31] will retry after 494.76273ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:19.523693  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:19.523772  791016 provision.go:87] duration metric: took 2.128222413s to configureAuth
	W0917 00:26:19.523787  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:19.523798  791016 retry.go:31] will retry after 3.318889156s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:22.843734  791016 provision.go:84] configureAuth start
	I0917 00:26:22.843830  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:22.861152  791016 provision.go:143] copyHostCerts
	I0917 00:26:22.861191  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:22.861227  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:22.861236  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:22.861288  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:22.861367  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:22.861386  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:22.861393  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:22.861415  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:22.861459  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:22.861475  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:22.861481  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:22.861499  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:22.861601  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:23.052424  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:23.052485  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:23.052521  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:23.069689  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:23.105081  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:23.105108  791016 retry.go:31] will retry after 349.300156ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:23.490547  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:23.490580  791016 retry.go:31] will retry after 224.689981ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:23.754667  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:23.754699  791016 retry.go:31] will retry after 397.257295ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:24.188087  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:24.188181  791016 retry.go:31] will retry after 233.82005ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:24.422610  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:24.441161  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:24.477396  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:24.477429  791016 retry.go:31] will retry after 217.93614ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:24.731162  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:24.731195  791016 retry.go:31] will retry after 543.106744ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:25.310425  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:25.310458  791016 retry.go:31] will retry after 677.952876ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:26.025241  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:26.025345  791016 provision.go:87] duration metric: took 3.181582431s to configureAuth
	W0917 00:26:26.025358  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:26.025378  791016 retry.go:31] will retry after 2.937511032s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:28.964067  791016 provision.go:84] configureAuth start
	I0917 00:26:28.964159  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:28.981410  791016 provision.go:143] copyHostCerts
	I0917 00:26:28.981446  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:28.981476  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:28.981485  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:28.981541  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:28.981616  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:28.981636  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:28.981643  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:28.981663  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:28.981706  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:28.981725  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:28.981731  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:28.981752  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:28.981803  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:29.817472  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:29.817531  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:29.817565  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:29.836010  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:29.871566  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:29.871593  791016 retry.go:31] will retry after 365.955083ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:30.273441  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:30.273480  791016 retry.go:31] will retry after 299.47315ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:30.609936  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:30.609981  791016 retry.go:31] will retry after 464.139848ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:31.110461  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:31.110559  791016 retry.go:31] will retry after 281.938805ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:31.393153  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:31.412126  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:31.448031  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:31.448060  791016 retry.go:31] will retry after 240.674801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:31.726392  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:31.726437  791016 retry.go:31] will retry after 519.604443ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:32.282247  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:32.282275  791016 retry.go:31] will retry after 382.48499ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:32.701204  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:32.701236  791016 retry.go:31] will retry after 692.255212ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:33.429731  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:33.429835  791016 provision.go:87] duration metric: took 4.465739293s to configureAuth
	W0917 00:26:33.429848  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:33.429863  791016 retry.go:31] will retry after 5.272755601s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:38.703985  791016 provision.go:84] configureAuth start
	I0917 00:26:38.704103  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:38.721357  791016 provision.go:143] copyHostCerts
	I0917 00:26:38.721395  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:38.721432  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:38.721441  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:38.721519  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:38.721609  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:38.721630  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:38.721637  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:38.721663  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:38.721708  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:38.721725  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:38.721731  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:38.721749  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:38.721830  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:38.866248  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:38.866317  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:38.866370  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:38.884241  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:38.919665  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:38.919696  791016 retry.go:31] will retry after 235.506838ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:39.191745  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:39.191789  791016 retry.go:31] will retry after 390.014802ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:39.619248  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:39.619277  791016 retry.go:31] will retry after 571.493485ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:40.225994  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:40.226067  791016 retry.go:31] will retry after 216.613249ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:40.443463  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:40.462158  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:40.498610  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:40.498658  791016 retry.go:31] will retry after 374.596845ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:40.909441  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:40.909473  791016 retry.go:31] will retry after 298.991353ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:41.245148  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:41.245180  791016 retry.go:31] will retry after 514.820757ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:41.797231  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:41.797273  791016 retry.go:31] will retry after 582.996085ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:42.417629  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.417736  791016 provision.go:87] duration metric: took 3.713721614s to configureAuth
	W0917 00:26:42.417749  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.417763  791016 ubuntu.go:202] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.417777  791016 machine.go:96] duration metric: took 10m58.374511119s to provisionDockerMachine
	I0917 00:26:42.417855  791016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:26:42.417888  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:42.435191  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:42.470768  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.470801  791016 retry.go:31] will retry after 345.968132ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:42.853264  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.853295  791016 retry.go:31] will retry after 554.061651ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:43.443002  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:43.443035  791016 retry.go:31] will retry after 543.13258ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:44.022801  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:44.022890  791016 retry.go:31] will retry after 370.797414ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:44.394558  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:44.412159  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:44.447565  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:44.447595  791016 retry.go:31] will retry after 247.565285ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:44.731705  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:44.731739  791016 retry.go:31] will retry after 493.651528ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:45.262011  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:45.262044  791016 retry.go:31] will retry after 795.250603ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.093432  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.093527  791016 start.go:268] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.093543  791016 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:46.093596  791016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:26:46.093646  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:46.111002  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:46.146831  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:46.146864  791016 retry.go:31] will retry after 125.228986ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.308502  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:46.308548  791016 retry.go:31] will retry after 489.138767ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.834015  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:46.834048  791016 retry.go:31] will retry after 417.464824ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:47.288306  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:47.288416  791016 retry.go:31] will retry after 372.538514ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:47.661780  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:47.679654  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:47.714898  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:47.714965  791016 retry.go:31] will retry after 343.045789ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:48.093992  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:48.094028  791016 retry.go:31] will retry after 370.55891ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:48.500717  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:48.500754  791016 retry.go:31] will retry after 705.998326ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:49.243081  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:49.243187  791016 start.go:283] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:49.243205  791016 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:49.243211  791016 fix.go:56] duration metric: took 11m5.520925064s for fixHost
	I0917 00:26:49.243218  791016 start.go:83] releasing machines lock for "ha-198834-m04", held for 11m5.520957344s
	W0917 00:26:49.243238  791016 start.go:714] error starting host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:49.243324  791016 out.go:285] ! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:49.243336  791016 start.go:729] Will try again in 5 seconds ...
	I0917 00:26:54.245406  791016 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:26:54.245544  791016 start.go:364] duration metric: took 79.986µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:26:54.245570  791016 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:26:54.245586  791016 fix.go:54] fixHost starting: m04
	I0917 00:26:54.245870  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:26:54.265001  791016 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Running err=<nil>
	W0917 00:26:54.265028  791016 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:26:54.267198  791016 out.go:252] * Updating the running docker "ha-198834-m04" container ...
	I0917 00:26:54.267265  791016 machine.go:93] provisionDockerMachine start ...
	I0917 00:26:54.267347  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:54.285375  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:26:54.285585  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0917 00:26:54.285596  791016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:26:54.321170  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:57.358493  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-198834 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker" : signal: killed
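Note on the failure above: every connection attempt to ha-198834-m04 on 127.0.0.1:32848 in the preceding log reaches the SSH authentication stage and is then rejected ("unable to authenticate, attempted methods [none publickey]"), so configureAuth and the subsequent df checks keep retrying until the test binary is killed. For reference only, the following is a minimal, self-contained Go sketch (not part of the minikube code base or this test suite) of the kind of publickey dial that sshutil performs, written against golang.org/x/crypto/ssh; the key path, user, and host port are the values reported in the log, while the program itself and its structure are illustrative assumptions.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Private key path as reported by sshutil.go:53 in the log above.
		keyPath := "/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa"

		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			log.Fatalf("read key: %v", err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatalf("parse key: %v", err)
		}

		cfg := &ssh.ClientConfig{
			User:            "docker", // Username from the log's ssh client struct
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only, not for production use
		}

		// Port 32848 is the forwarded 22/tcp port from the log. An error of the
		// form "unable to authenticate, attempted methods [none publickey]" means
		// the TCP connection and SSH handshake succeeded but the server rejected
		// the offered key, which matches the failure repeated throughout this run.
		client, err := ssh.Dial("tcp", "127.0.0.1:32848", cfg)
		if err != nil {
			log.Fatalf("dial: %v", err)
		}
		defer client.Close()
		fmt.Println("authenticated to ha-198834-m04")
	}

Read this way, the repeated rejections suggest (though the log alone does not prove it) that the host-side key and the authorized_keys inside the restarted ha-198834-m04 container are out of sync, rather than the forwarded port being unreachable, since every retry fails at the same authentication stage.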
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-198834
helpers_test.go:243: (dbg) docker inspect ha-198834:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	        "Created": "2025-09-16T23:57:02.499662369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 791214,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:14:52.205875625Z",
	            "FinishedAt": "2025-09-17T00:14:51.500458368Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/hosts",
	        "LogPath": "/var/lib/docker/containers/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51/47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51-json.log",
	        "Name": "/ha-198834",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-198834:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-198834",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "47e5b1e4a4a54393d95d2fc54ba8e6df0394126726cb08c4999522c520900c51",
	                "LowerDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e23b044058f2d5382195c39b01075877743d56cb3b0f346df896a9277153245/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-198834",
	                "Source": "/var/lib/docker/volumes/ha-198834/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-198834",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-198834",
	                "name.minikube.sigs.k8s.io": "ha-198834",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30f0c2c8b99ab19c83ecad6b17ee5e3753dede7e4b721df61acf92893b317949",
	            "SandboxKey": "/var/run/docker/netns/30f0c2c8b99a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32838"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32839"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32842"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32840"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32841"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-198834": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:d7:11:94:de:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab651df73000b515d018703342371ce7de7a02a0092c0b9b72849c77d387bab3",
	                    "EndpointID": "f29453076ec9e7f71fd7625d1ccdd7b866c69f1bb1e907a6ed74fcb01f29bba2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-198834",
	                        "47e5b1e4a4a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198834 -n ha-198834
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 logs -n 25: (1.288590666s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-198834 cp ha-198834-m03:/home/docker/cp-test.txt ha-198834-m04:/home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test_ha-198834-m03_ha-198834-m04.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp testdata/cp-test.txt ha-198834-m04:/home/docker/cp-test.txt                                                           │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile37309842/001/cp-test_ha-198834-m04.txt │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834:/home/docker/cp-test_ha-198834-m04_ha-198834.txt                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834.txt                                               │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m02:/home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m02 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m02.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ cp      │ ha-198834 cp ha-198834-m04:/home/docker/cp-test.txt ha-198834-m03:/home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt             │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ ssh     │ ha-198834 ssh -n ha-198834-m03 sudo cat /home/docker/cp-test_ha-198834-m04_ha-198834-m03.txt                                       │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │                     │
	│ node    │ ha-198834 node stop m02 --alsologtostderr -v 5                                                                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ node    │ ha-198834 node start m02 --alsologtostderr -v 5                                                                                    │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:05 UTC │
	│ node    │ ha-198834 node list --alsologtostderr -v 5                                                                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:05 UTC │                     │
	│ stop    │ ha-198834 stop --alsologtostderr -v 5                                                                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:05 UTC │ 17 Sep 25 00:06 UTC │
	│ start   │ ha-198834 start --wait true --alsologtostderr -v 5                                                                                 │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:06 UTC │                     │
	│ node    │ ha-198834 node list --alsologtostderr -v 5                                                                                         │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:14 UTC │                     │
	│ node    │ ha-198834 node delete m03 --alsologtostderr -v 5                                                                                   │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:14 UTC │ 17 Sep 25 00:14 UTC │
	│ stop    │ ha-198834 stop --alsologtostderr -v 5                                                                                              │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:14 UTC │ 17 Sep 25 00:14 UTC │
	│ start   │ ha-198834 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker                                     │ ha-198834 │ jenkins │ v1.37.0 │ 17 Sep 25 00:14 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:14:51
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:14:51.984510  791016 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:14:51.984623  791016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:14:51.984632  791016 out.go:374] Setting ErrFile to fd 2...
	I0917 00:14:51.984636  791016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:14:51.984851  791016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:14:51.985333  791016 out.go:368] Setting JSON to false
	I0917 00:14:51.986252  791016 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10624,"bootTime":1758057468,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:14:51.986348  791016 start.go:140] virtualization: kvm guest
	I0917 00:14:51.988580  791016 out.go:179] * [ha-198834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:14:51.990067  791016 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:14:51.990084  791016 notify.go:220] Checking for updates...
	I0917 00:14:51.993043  791016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:14:51.997525  791016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:14:51.998766  791016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:14:52.000046  791016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:14:52.001295  791016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:14:52.003279  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:14:52.004009  791016 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:14:52.027043  791016 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:14:52.027118  791016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:14:52.079584  791016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:14:52.069593968 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:14:52.079701  791016 docker.go:318] overlay module found
	I0917 00:14:52.081639  791016 out.go:179] * Using the docker driver based on existing profile
	I0917 00:14:52.082816  791016 start.go:304] selected driver: docker
	I0917 00:14:52.082830  791016 start.go:918] validating driver "docker" against &{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fa
lse kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:14:52.083014  791016 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:14:52.083096  791016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:14:52.133926  791016 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-17 00:14:52.125055044 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:14:52.134728  791016 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:14:52.134759  791016 cni.go:84] Creating CNI manager for ""
	I0917 00:14:52.134818  791016 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:14:52.134880  791016 start.go:348] cluster config:
	{Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvid
ia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:14:52.136778  791016 out.go:179] * Starting "ha-198834" primary control-plane node in "ha-198834" cluster
	I0917 00:14:52.138068  791016 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:14:52.139374  791016 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:14:52.140532  791016 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:14:52.140567  791016 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:14:52.140577  791016 cache.go:58] Caching tarball of preloaded images
	I0917 00:14:52.140634  791016 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:14:52.140682  791016 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:14:52.140695  791016 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:14:52.140810  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:14:52.161634  791016 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:14:52.161656  791016 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:14:52.161670  791016 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:14:52.161699  791016 start.go:360] acquireMachinesLock for ha-198834: {Name:mk72787ec2f43d39f6405224749d27e293a28eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:14:52.161757  791016 start.go:364] duration metric: took 40.027µs to acquireMachinesLock for "ha-198834"
	I0917 00:14:52.161775  791016 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:14:52.161780  791016 fix.go:54] fixHost starting: 
	I0917 00:14:52.162001  791016 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:14:52.178501  791016 fix.go:112] recreateIfNeeded on ha-198834: state=Stopped err=<nil>
	W0917 00:14:52.178530  791016 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:14:52.180503  791016 out.go:252] * Restarting existing docker container for "ha-198834" ...
	I0917 00:14:52.180565  791016 cli_runner.go:164] Run: docker start ha-198834
	I0917 00:14:52.416033  791016 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:14:52.434074  791016 kic.go:430] container "ha-198834" state is running.
	I0917 00:14:52.434534  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:14:52.452524  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:14:52.452733  791016 machine.go:93] provisionDockerMachine start ...
	I0917 00:14:52.452794  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:52.469982  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:52.470316  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:52.470337  791016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:14:52.470946  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43040->127.0.0.1:32838: read: connection reset by peer
	I0917 00:14:55.608979  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:14:55.609013  791016 ubuntu.go:182] provisioning hostname "ha-198834"
	I0917 00:14:55.609067  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:55.625994  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:55.626247  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:55.626266  791016 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834 && echo "ha-198834" | sudo tee /etc/hostname
	I0917 00:14:55.773773  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834
	
	I0917 00:14:55.773838  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:55.791030  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:55.791293  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:55.791317  791016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:14:55.926499  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:14:55.926537  791016 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:14:55.926565  791016 ubuntu.go:190] setting up certificates
	I0917 00:14:55.926576  791016 provision.go:84] configureAuth start
	I0917 00:14:55.926624  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:14:55.943807  791016 provision.go:143] copyHostCerts
	I0917 00:14:55.943858  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:14:55.943889  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:14:55.943917  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:14:55.944003  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:14:55.944132  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:14:55.944164  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:14:55.944172  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:14:55.944218  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:14:55.944358  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:14:55.944389  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:14:55.944398  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:14:55.944447  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:14:55.944523  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834 san=[127.0.0.1 192.168.49.2 ha-198834 localhost minikube]
	I0917 00:14:55.998211  791016 provision.go:177] copyRemoteCerts
	I0917 00:14:55.998274  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:14:55.998311  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.015389  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:56.112506  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:14:56.112586  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:14:56.137125  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:14:56.137200  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:14:56.161381  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:14:56.161451  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:14:56.186672  791016 provision.go:87] duration metric: took 260.078934ms to configureAuth
	I0917 00:14:56.186707  791016 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:14:56.186953  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:14:56.187011  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.204384  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:56.204678  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:56.204693  791016 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:14:56.340595  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:14:56.340617  791016 ubuntu.go:71] root file system type: overlay
	I0917 00:14:56.340751  791016 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:14:56.340818  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.357831  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:56.358082  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:56.358152  791016 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:14:56.507578  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:14:56.507700  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.524869  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:14:56.525110  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0917 00:14:56.525130  791016 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:14:56.666364  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:14:56.666388  791016 machine.go:96] duration metric: took 4.213639628s to provisionDockerMachine
	I0917 00:14:56.666402  791016 start.go:293] postStartSetup for "ha-198834" (driver="docker")
	I0917 00:14:56.666415  791016 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:14:56.666485  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:14:56.666538  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.684227  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:56.780818  791016 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:14:56.784289  791016 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:14:56.784338  791016 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:14:56.784346  791016 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:14:56.784353  791016 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:14:56.784368  791016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:14:56.784415  791016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:14:56.784504  791016 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:14:56.784521  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:14:56.784604  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:14:56.793777  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:14:56.818622  791016 start.go:296] duration metric: took 152.204271ms for postStartSetup
	I0917 00:14:56.818715  791016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:14:56.818756  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.835972  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:56.927988  791016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:14:56.932365  791016 fix.go:56] duration metric: took 4.770576567s for fixHost
	I0917 00:14:56.932397  791016 start.go:83] releasing machines lock for "ha-198834", held for 4.770626502s
	I0917 00:14:56.932466  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834
	I0917 00:14:56.949411  791016 ssh_runner.go:195] Run: cat /version.json
	I0917 00:14:56.949449  791016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:14:56.949462  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.949546  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:14:56.967027  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:56.968031  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:14:57.127150  791016 ssh_runner.go:195] Run: systemctl --version
	I0917 00:14:57.132073  791016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:14:57.136597  791016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:14:57.155743  791016 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:14:57.155819  791016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:14:57.165195  791016 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:14:57.165233  791016 start.go:495] detecting cgroup driver to use...
	I0917 00:14:57.165266  791016 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:14:57.165384  791016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:14:57.182476  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:14:57.192824  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:14:57.203668  791016 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:14:57.203735  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:14:57.214504  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:14:57.225550  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:14:57.235825  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:14:57.246476  791016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:14:57.256782  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:14:57.267321  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:14:57.277778  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:14:57.288377  791016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:14:57.297185  791016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:14:57.305932  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:57.376667  791016 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:14:57.454312  791016 start.go:495] detecting cgroup driver to use...
	I0917 00:14:57.454356  791016 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:14:57.454399  791016 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:14:57.467289  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:14:57.478703  791016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:14:57.494132  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:14:57.505313  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:14:57.517213  791016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:14:57.534191  791016 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:14:57.537619  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:14:57.546337  791016 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:14:57.564871  791016 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:14:57.632307  791016 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:14:57.699840  791016 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:14:57.699996  791016 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:14:57.718319  791016 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:14:57.729368  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:57.799687  791016 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:14:58.627975  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:14:58.639659  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:14:58.651672  791016 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:14:58.663882  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:14:58.675231  791016 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:14:58.744466  791016 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:14:58.809772  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:58.874459  791016 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:14:58.898119  791016 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:14:58.909139  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:58.974481  791016 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:14:59.053663  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:14:59.065692  791016 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:14:59.065760  791016 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:14:59.069878  791016 start.go:563] Will wait 60s for crictl version
	I0917 00:14:59.069957  791016 ssh_runner.go:195] Run: which crictl
	I0917 00:14:59.073583  791016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:14:59.107316  791016 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:14:59.107388  791016 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:14:59.132627  791016 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:14:59.159893  791016 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:14:59.159983  791016 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:14:59.175796  791016 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:14:59.179723  791016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
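The one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal line, append the fresh mapping, and copy the temporary file back with sudo. The same pattern, parameterized as a small function (illustrative sketch, not minikube code):

    update_hosts_entry() {
      # usage: update_hosts_entry 192.168.49.1 host.minikube.internal
      ip="$1"; name="$2"
      { grep -v "$(printf '\t%s$' "$name")" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
    }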
	I0917 00:14:59.192006  791016 kubeadm.go:875] updating cluster {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevir
t:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:14:59.192142  791016 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:14:59.192197  791016 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:14:59.213503  791016 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:14:59.213523  791016 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:14:59.213573  791016 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:14:59.235481  791016 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0917 00:14:59.235506  791016 cache_images.go:85] Images are preloaded, skipping loading
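Because both docker images listings already contain every image in the v1.34.0 preload set, the tarball extraction and per-image loading are skipped. The comparison is easy to reproduce by hand (same command the log runs, just sorted for diffing against the expected list printed above):

    docker images --format '{{.Repository}}:{{.Tag}}' | sort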
	I0917 00:14:59.235519  791016 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0917 00:14:59.235645  791016 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:14:59.235709  791016 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:14:59.287490  791016 cni.go:84] Creating CNI manager for ""
	I0917 00:14:59.287511  791016 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:14:59.287530  791016 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:14:59.287550  791016 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198834 NodeName:ha-198834 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:14:59.287669  791016 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-198834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
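The kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is written further down to /var/tmp/minikube/kubeadm.yaml.new and, on this restart path, only diffed against the existing file. Assuming the kubeadm binary staged under /var/lib/minikube/binaries/v1.34.0 and its config validate subcommand, the file could also be sanity-checked directly (hedged sketch, not something the log itself does):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new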
	
	I0917 00:14:59.287686  791016 kube-vip.go:115] generating kube-vip config ...
	I0917 00:14:59.287725  791016 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:14:59.300724  791016 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
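kube-vip's IPVS-based control-plane load balancing is skipped here because lsmod | grep ip_vs finds no modules inside the container. On a host where the modules are present they could be loaded and re-checked like this (the module names are the usual ip_vs set, not something taken from this log):

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs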
	I0917 00:14:59.300820  791016 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:14:59.300869  791016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:14:59.310131  791016 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:14:59.310206  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:14:59.319198  791016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0917 00:14:59.338210  791016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:14:59.356160  791016 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0917 00:14:59.374135  791016 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
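The kube-vip manifest generated earlier is copied into /etc/kubernetes/manifests, so the kubelet will run it as a static pod once it is up. A hedged check that it came back after the restart (the pod name assumes the usual <name>-<nodename> static-pod convention):

    kubectl -n kube-system get pod kube-vip-ha-198834
    docker ps --filter name=kube-vip --format '{{.Names}}\t{{.Status}}'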
	I0917 00:14:59.391922  791016 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:14:59.395394  791016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:14:59.406380  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:14:59.474578  791016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:14:59.496142  791016 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.2
	I0917 00:14:59.496164  791016 certs.go:194] generating shared ca certs ...
	I0917 00:14:59.496187  791016 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:14:59.496351  791016 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:14:59.496407  791016 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:14:59.496421  791016 certs.go:256] generating profile certs ...
	I0917 00:14:59.496539  791016 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:14:59.496580  791016 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.9ff12082
	I0917 00:14:59.496599  791016 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.9ff12082 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:14:59.782546  791016 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.9ff12082 ...
	I0917 00:14:59.782585  791016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.9ff12082: {Name:mkd77e113eef8cc978e41c42a33e5d17dfff4d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:14:59.782773  791016 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.9ff12082 ...
	I0917 00:14:59.782792  791016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.9ff12082: {Name:mk3187b32897fcdd1c8f3a813b2dbb432ec29d5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:14:59.782949  791016 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt.9ff12082 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt
	I0917 00:14:59.783161  791016 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.9ff12082 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key
	I0917 00:14:59.783350  791016 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:14:59.783370  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:14:59.783385  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:14:59.783400  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:14:59.783419  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:14:59.783473  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:14:59.783497  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:14:59.783515  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:14:59.783532  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:14:59.783595  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:14:59.783637  791016 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:14:59.783650  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:14:59.783685  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:14:59.783712  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:14:59.783746  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:14:59.783801  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:14:59.783841  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:14:59.783863  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:14:59.783880  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:14:59.784428  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:14:59.812725  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:14:59.839361  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:14:59.863852  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:14:59.888859  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:14:59.912896  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:14:59.937052  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:14:59.961552  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:14:59.985795  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:15:00.010010  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:15:00.039777  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:15:00.073277  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:15:00.098053  791016 ssh_runner.go:195] Run: openssl version
	I0917 00:15:00.106012  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:15:00.121822  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:00.127559  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:00.127633  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:00.136603  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:15:00.147672  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:15:00.163310  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:15:00.170581  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:15:00.170659  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:15:00.181863  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:15:00.195819  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:15:00.211427  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:15:00.217353  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:15:00.217422  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:15:00.227363  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
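The three test -L || ln -fs commands above create the OpenSSL subject-hash symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that let TLS clients find the CA certificates in /etc/ssl/certs. The hash in each link name is exactly what openssl x509 -hash prints for the certificate, so the generic form is (illustrative):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0"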
	I0917 00:15:00.241884  791016 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:15:00.247297  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:15:00.255471  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:15:00.262948  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:15:00.271239  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:15:00.279755  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:15:00.288625  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
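Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours (86400 seconds), which is how the restart path decides whether the control-plane certs need regenerating. Spot-checking one by hand (sketch):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"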
	I0917 00:15:00.298707  791016 kubeadm.go:392] StartCluster: {Name:ha-198834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:f
alse logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:15:00.298959  791016 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:15:00.331583  791016 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:15:00.344226  791016 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:15:00.344251  791016 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:15:00.344297  791016 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:15:00.356225  791016 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:15:00.356954  791016 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-198834" does not appear in /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:15:00.357167  791016 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-661878/kubeconfig needs updating (will repair): [kubeconfig missing "ha-198834" cluster setting kubeconfig missing "ha-198834" context setting]
	I0917 00:15:00.357537  791016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:15:00.358238  791016 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:15:00.358794  791016 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:15:00.358816  791016 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:15:00.358822  791016 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:15:00.358827  791016 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:15:00.358832  791016 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:15:00.358831  791016 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:15:00.359348  791016 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:15:00.374216  791016 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:15:00.374315  791016 kubeadm.go:593] duration metric: took 30.054163ms to restartPrimaryControlPlane
	I0917 00:15:00.374348  791016 kubeadm.go:394] duration metric: took 75.647083ms to StartCluster
	I0917 00:15:00.374405  791016 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:15:00.374563  791016 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:15:00.375597  791016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:15:00.376534  791016 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:15:00.376617  791016 start.go:241] waiting for startup goroutines ...
	I0917 00:15:00.376603  791016 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:15:00.376897  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:00.379385  791016 out.go:179] * Enabled addons: 
	I0917 00:15:00.381029  791016 addons.go:514] duration metric: took 4.421986ms for enable addons: enabled=[]
	I0917 00:15:00.381072  791016 start.go:246] waiting for cluster config update ...
	I0917 00:15:00.381084  791016 start.go:255] writing updated cluster config ...
	I0917 00:15:00.382978  791016 out.go:203] 
	I0917 00:15:00.384821  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:00.384920  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:00.386510  791016 out.go:179] * Starting "ha-198834-m02" control-plane node in "ha-198834" cluster
	I0917 00:15:00.388511  791016 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:15:00.389947  791016 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:15:00.391240  791016 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:15:00.391267  791016 cache.go:58] Caching tarball of preloaded images
	I0917 00:15:00.391331  791016 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:15:00.391366  791016 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:15:00.391376  791016 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:15:00.391505  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:00.417091  791016 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:15:00.417116  791016 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:15:00.417137  791016 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:15:00.417168  791016 start.go:360] acquireMachinesLock for ha-198834-m02: {Name:mka26d69ac2a19118f71b5186fd38cc3e669de2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:15:00.417240  791016 start.go:364] duration metric: took 48.032µs to acquireMachinesLock for "ha-198834-m02"
	I0917 00:15:00.417265  791016 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:15:00.417271  791016 fix.go:54] fixHost starting: m02
	I0917 00:15:00.417572  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:15:00.441100  791016 fix.go:112] recreateIfNeeded on ha-198834-m02: state=Stopped err=<nil>
	W0917 00:15:00.441136  791016 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:15:00.443247  791016 out.go:252] * Restarting existing docker container for "ha-198834-m02" ...
	I0917 00:15:00.443344  791016 cli_runner.go:164] Run: docker start ha-198834-m02
	I0917 00:15:00.796646  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:15:00.817426  791016 kic.go:430] container "ha-198834-m02" state is running.
	I0917 00:15:00.817876  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:15:00.836189  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:00.836456  791016 machine.go:93] provisionDockerMachine start ...
	I0917 00:15:00.836532  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:00.857658  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:00.857866  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:00.857878  791016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:15:00.858657  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54150->127.0.0.1:32843: read: connection reset by peer
	I0917 00:15:04.012250  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:15:04.012285  791016 ubuntu.go:182] provisioning hostname "ha-198834-m02"
	I0917 00:15:04.012348  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:04.035386  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:04.035689  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:04.035711  791016 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m02 && echo "ha-198834-m02" | sudo tee /etc/hostname
	I0917 00:15:04.199294  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198834-m02
	
	I0917 00:15:04.199371  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:04.217053  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:04.217272  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:04.217296  791016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:15:04.358579  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:15:04.358614  791016 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:15:04.358640  791016 ubuntu.go:190] setting up certificates
	I0917 00:15:04.358657  791016 provision.go:84] configureAuth start
	I0917 00:15:04.358716  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:15:04.385667  791016 provision.go:143] copyHostCerts
	I0917 00:15:04.385715  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:15:04.385758  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:15:04.385770  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:15:04.385862  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:15:04.385993  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:15:04.386033  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:15:04.386044  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:15:04.386087  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:15:04.386159  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:15:04.386195  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:15:04.386205  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:15:04.386246  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:15:04.386320  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m02 san=[127.0.0.1 192.168.49.3 ha-198834-m02 localhost minikube]
	I0917 00:15:04.984661  791016 provision.go:177] copyRemoteCerts
	I0917 00:15:04.984745  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:15:04.984793  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.012125  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:05.136809  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:15:05.136881  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:15:05.176029  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:15:05.176109  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:15:05.218675  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:15:05.218751  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:15:05.261970  791016 provision.go:87] duration metric: took 903.291872ms to configureAuth
	I0917 00:15:05.262072  791016 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:15:05.262363  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:05.262473  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.286994  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:05.287695  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:05.287713  791016 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:15:05.457610  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:15:05.457635  791016 ubuntu.go:71] root file system type: overlay
	I0917 00:15:05.457785  791016 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:15:05.457847  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.485000  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:05.485376  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:05.485488  791016 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:15:05.653320  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:15:05.653408  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.675057  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:05.675286  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0917 00:15:05.675303  791016 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
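The command above is the idempotent unit update: only if the freshly rendered docker.service.new differs from the installed unit is it moved into place and docker re-enabled and restarted; identical files leave the running service untouched. Confirming what actually ended up active (hedged; the CgroupDriver query is the same check the log runs on the primary node):

    sudo systemctl show docker -p ExecStart --no-pager
    docker info --format '{{.CgroupDriver}}'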
	I0917 00:15:05.827491  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:15:05.827539  791016 machine.go:96] duration metric: took 4.991067416s to provisionDockerMachine
	I0917 00:15:05.827554  791016 start.go:293] postStartSetup for "ha-198834-m02" (driver="docker")
	I0917 00:15:05.827568  791016 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:15:05.827632  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:15:05.827683  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:05.852368  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:05.969426  791016 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:15:05.974657  791016 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:15:05.974700  791016 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:15:05.974711  791016 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:15:05.974720  791016 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:15:05.974735  791016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:15:05.974814  791016 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:15:05.974922  791016 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:15:05.974933  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /etc/ssl/certs/6653992.pem
	I0917 00:15:05.975061  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:15:05.987682  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:15:06.014751  791016 start.go:296] duration metric: took 187.179003ms for postStartSetup
	I0917 00:15:06.014841  791016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:15:06.014888  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:06.032229  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:06.127487  791016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:15:06.132280  791016 fix.go:56] duration metric: took 5.715000153s for fixHost
	I0917 00:15:06.132307  791016 start.go:83] releasing machines lock for "ha-198834-m02", held for 5.715051279s
	I0917 00:15:06.132377  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m02
	I0917 00:15:06.151141  791016 out.go:179] * Found network options:
	I0917 00:15:06.152404  791016 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:15:06.153500  791016 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:15:06.153560  791016 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:15:06.153646  791016 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:15:06.153693  791016 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:15:06.153703  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:06.153775  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m02
	I0917 00:15:06.172919  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:06.173221  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m02/id_rsa Username:docker}
	I0917 00:15:06.336059  791016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:15:06.356573  791016 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:15:06.356644  791016 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:15:06.367823  791016 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:15:06.367850  791016 start.go:495] detecting cgroup driver to use...
	I0917 00:15:06.367879  791016 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:15:06.368011  791016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:15:06.384865  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:15:06.395196  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:15:06.405823  791016 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:15:06.405897  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:15:06.416855  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:15:06.427229  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:15:06.437724  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:15:06.448158  791016 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:15:06.457924  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:15:06.469234  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:15:06.479584  791016 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:15:06.490186  791016 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:15:06.499478  791016 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:15:06.508117  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:06.649341  791016 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:15:06.856941  791016 start.go:495] detecting cgroup driver to use...
	I0917 00:15:06.857001  791016 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:15:06.857054  791016 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:15:06.871023  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:15:06.883130  791016 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:15:06.904414  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:15:06.917467  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:15:06.929880  791016 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:15:06.948179  791016 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:15:06.951952  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:15:06.962102  791016 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:15:06.986553  791016 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:15:07.111853  791016 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:15:07.255138  791016 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:15:07.255189  791016 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:15:07.279181  791016 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:15:07.291464  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:07.423129  791016 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:15:33.539124  791016 ssh_runner.go:235] Completed: sudo systemctl restart docker: (26.115947477s)
	I0917 00:15:33.539215  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:15:33.556589  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:15:33.573719  791016 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:15:33.598265  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:15:33.613037  791016 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:15:33.716455  791016 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:15:33.842930  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:33.991506  791016 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:15:34.022454  791016 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:15:34.043666  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:34.183399  791016 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:15:34.334172  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:15:34.353300  791016 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:15:34.353475  791016 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:15:34.359509  791016 start.go:563] Will wait 60s for crictl version
	I0917 00:15:34.359581  791016 ssh_runner.go:195] Run: which crictl
	I0917 00:15:34.364611  791016 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:15:34.433146  791016 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:15:34.433245  791016 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:15:34.476733  791016 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:15:34.524154  791016 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:15:34.526057  791016 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:15:34.527467  791016 cli_runner.go:164] Run: docker network inspect ha-198834 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:15:34.552695  791016 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:15:34.559052  791016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:15:34.580385  791016 mustload.go:65] Loading cluster: ha-198834
	I0917 00:15:34.580685  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:34.581036  791016 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:15:34.616278  791016 host.go:66] Checking if "ha-198834" exists ...
	I0917 00:15:34.616686  791016 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834 for IP: 192.168.49.3
	I0917 00:15:34.616764  791016 certs.go:194] generating shared ca certs ...
	I0917 00:15:34.616799  791016 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:15:34.617054  791016 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:15:34.617163  791016 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:15:34.617177  791016 certs.go:256] generating profile certs ...
	I0917 00:15:34.617315  791016 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key
	I0917 00:15:34.617397  791016 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key.3ea539d4
	I0917 00:15:34.617455  791016 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key
	I0917 00:15:34.617468  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:15:34.617485  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:15:34.617500  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:15:34.617515  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:15:34.617528  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:15:34.617543  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:15:34.617557  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:15:34.617570  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:15:34.617643  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:15:34.617688  791016 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:15:34.617699  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:15:34.617730  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:15:34.617774  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:15:34.617801  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:15:34.617876  791016 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:15:34.617935  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem -> /usr/share/ca-certificates/665399.pem
	I0917 00:15:34.617957  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> /usr/share/ca-certificates/6653992.pem
	I0917 00:15:34.617984  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:34.618057  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834
	I0917 00:15:34.644031  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834/id_rsa Username:docker}
	I0917 00:15:34.759250  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:15:34.766794  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:15:34.802705  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:15:34.814466  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0917 00:15:34.841530  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:15:34.848280  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:15:34.882344  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:15:34.894700  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0917 00:15:34.916869  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:15:34.923475  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:15:34.959856  791016 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:15:34.971121  791016 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:15:35.007770  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:15:35.090564  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:15:35.159352  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:15:35.215888  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:15:35.283778  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:15:35.346216  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:15:35.406460  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:15:35.478425  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:15:35.547961  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:15:35.625362  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:15:35.679806  791016 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:15:35.729740  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:15:35.769544  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0917 00:15:35.819313  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:15:35.859183  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0917 00:15:35.904607  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:15:35.943073  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:15:36.017602  791016 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:15:36.071338  791016 ssh_runner.go:195] Run: openssl version
	I0917 00:15:36.087808  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:15:36.119634  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:15:36.125999  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:15:36.126077  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:15:36.139804  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:15:36.169274  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:15:36.194768  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:15:36.203260  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:15:36.203337  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:15:36.220717  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:15:36.239759  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:15:36.273783  791016 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:36.284643  791016 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:36.284717  791016 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:15:36.303099  791016 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:15:36.318748  791016 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:15:36.325645  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:15:36.339417  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:15:36.349966  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:15:36.364947  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:15:36.376365  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:15:36.392566  791016 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:15:36.405182  791016 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0917 00:15:36.405340  791016 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198834-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-198834 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:15:36.405377  791016 kube-vip.go:115] generating kube-vip config ...
	I0917 00:15:36.405560  791016 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:15:36.427797  791016 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:15:36.427973  791016 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
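	Regarding the "lsmod | grep ip_vs" check a few lines above (the step that made kube-vip fall back from control-plane load-balancing): the following is a minimal sketch, assuming the host kernel ships the usual IPVS modules, of how they could be loaded and verified by hand. Module names and availability depend on the kernel build, so this is illustrative only.

	# Load the common IPVS modules (names are the usual set; availability depends on the kernel).
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	# Re-run the same check the log shows; it should now list the ip_vs modules.
	lsmod | grep ip_vs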
	I0917 00:15:36.428079  791016 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:15:36.445942  791016 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:15:36.446081  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:15:36.463378  791016 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:15:36.495762  791016 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:15:36.552099  791016 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:15:36.585385  791016 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:15:36.593638  791016 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:15:36.613847  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:36.850964  791016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:15:36.870862  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:36.870544  791016 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:15:36.875616  791016 out.go:179] * Verifying Kubernetes components...
	I0917 00:15:36.876865  791016 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:15:37.094569  791016 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:15:37.164528  791016 kapi.go:59] client config for ha-198834: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/client.key", CAFile:"/home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:15:37.164638  791016 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:15:37.165011  791016 node_ready.go:35] waiting up to 6m0s for node "ha-198834-m02" to be "Ready" ...
	I0917 00:15:43.121636  791016 node_ready.go:49] node "ha-198834-m02" is "Ready"
	I0917 00:15:43.121673  791016 node_ready.go:38] duration metric: took 5.95663549s for node "ha-198834-m02" to be "Ready" ...
	I0917 00:15:43.121697  791016 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:15:43.121757  791016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:15:43.622856  791016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:15:43.635792  791016 api_server.go:72] duration metric: took 6.764839609s to wait for apiserver process to appear ...
	I0917 00:15:43.635819  791016 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:15:43.635843  791016 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:15:43.641470  791016 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:15:43.642433  791016 api_server.go:141] control plane version: v1.34.0
	I0917 00:15:43.642459  791016 api_server.go:131] duration metric: took 6.632404ms to wait for apiserver health ...
	I0917 00:15:43.642471  791016 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:15:43.648490  791016 system_pods.go:59] 24 kube-system pods found
	I0917 00:15:43.648534  791016 system_pods.go:61] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:15:43.648540  791016 system_pods.go:61] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:15:43.648546  791016 system_pods.go:61] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:15:43.648549  791016 system_pods.go:61] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:15:43.648552  791016 system_pods.go:61] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:15:43.648555  791016 system_pods.go:61] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:15:43.648558  791016 system_pods.go:61] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:15:43.648562  791016 system_pods.go:61] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:15:43.648566  791016 system_pods.go:61] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:15:43.648571  791016 system_pods.go:61] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:15:43.648575  791016 system_pods.go:61] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:15:43.648580  791016 system_pods.go:61] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:15:43.648585  791016 system_pods.go:61] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:15:43.648590  791016 system_pods.go:61] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:15:43.648594  791016 system_pods.go:61] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:15:43.648598  791016 system_pods.go:61] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:15:43.648603  791016 system_pods.go:61] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0917 00:15:43.648607  791016 system_pods.go:61] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:15:43.648612  791016 system_pods.go:61] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:15:43.648616  791016 system_pods.go:61] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:15:43.648624  791016 system_pods.go:61] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0917 00:15:43.648629  791016 system_pods.go:61] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:15:43.648634  791016 system_pods.go:61] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:15:43.648637  791016 system_pods.go:61] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:15:43.648642  791016 system_pods.go:74] duration metric: took 6.165191ms to wait for pod list to return data ...
	I0917 00:15:43.648650  791016 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:15:43.651851  791016 default_sa.go:45] found service account: "default"
	I0917 00:15:43.651875  791016 default_sa.go:55] duration metric: took 3.218931ms for default service account to be created ...
	I0917 00:15:43.651887  791016 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:15:43.666854  791016 system_pods.go:86] 24 kube-system pods found
	I0917 00:15:43.666888  791016 system_pods.go:89] "coredns-66bc5c9577-5wx4k" [6f279fd8-dd3c-49a5-863d-a53124ecf1f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:15:43.666897  791016 system_pods.go:89] "coredns-66bc5c9577-mjbz6" [c918625f-be11-44bf-8b82-d4c21b8993d1] Running
	I0917 00:15:43.666919  791016 system_pods.go:89] "etcd-ha-198834" [8374ebf7-cb1d-422e-8768-584e07b2dcab] Running
	I0917 00:15:43.666925  791016 system_pods.go:89] "etcd-ha-198834-m02" [222eaa2a-824e-4087-a614-f5f5a6de8e98] Running
	I0917 00:15:43.666930  791016 system_pods.go:89] "etcd-ha-198834-m03" [07a1b36a-f633-4f93-a8c2-1bc7bc4ce072] Running
	I0917 00:15:43.666935  791016 system_pods.go:89] "kindnet-2vbn5" [acd8be88-6ee7-4832-830f-c98aaabacd81] Running
	I0917 00:15:43.666940  791016 system_pods.go:89] "kindnet-67fn9" [69cae545-8970-4b63-8dfa-6c201205e9dd] Running
	I0917 00:15:43.666945  791016 system_pods.go:89] "kindnet-h28vp" [6c51d39f-7e43-461b-a021-13ddf0cb9845] Running
	I0917 00:15:43.666951  791016 system_pods.go:89] "kube-apiserver-ha-198834" [5176645e-1819-4ab4-add3-9355f9a506ce] Running
	I0917 00:15:43.666956  791016 system_pods.go:89] "kube-apiserver-ha-198834-m02" [a2c8eefd-3b40-484d-9939-74b5fdba7182] Running
	I0917 00:15:43.666961  791016 system_pods.go:89] "kube-apiserver-ha-198834-m03" [6b3daabc-2aec-427f-8ee1-b89cc599cfe1] Running
	I0917 00:15:43.666967  791016 system_pods.go:89] "kube-controller-manager-ha-198834" [36327629-7bc1-440d-b760-3fdf88af1b03] Running
	I0917 00:15:43.666972  791016 system_pods.go:89] "kube-controller-manager-ha-198834-m02" [434a65bb-a306-4798-81f9-9631313ba763] Running
	I0917 00:15:43.666977  791016 system_pods.go:89] "kube-controller-manager-ha-198834-m03" [bb6c5982-6f3f-4ac2-ad73-2044b6b73019] Running
	I0917 00:15:43.666983  791016 system_pods.go:89] "kube-proxy-5tkhn" [5edbfebe-2590-4d23-b80e-7496a4e9a5b6] Running
	I0917 00:15:43.666987  791016 system_pods.go:89] "kube-proxy-d8brp" [00263ada-ca4e-4585-b712-19f6e60ce72b] Running
	I0917 00:15:43.666992  791016 system_pods.go:89] "kube-proxy-h2fxd" [db1b17f7-7be8-46ef-8eb3-98432a2eec18] Running
	I0917 00:15:43.666997  791016 system_pods.go:89] "kube-scheduler-ha-198834" [45afa1e0-273e-44fc-b170-bdc7a365273e] Running
	I0917 00:15:43.667001  791016 system_pods.go:89] "kube-scheduler-ha-198834-m02" [633b0b32-1d8b-4301-85a5-8c36f53296e3] Running
	I0917 00:15:43.667005  791016 system_pods.go:89] "kube-scheduler-ha-198834-m03" [bc9f09c0-2af3-4108-b0d5-116e3d07d4b6] Running
	I0917 00:15:43.667011  791016 system_pods.go:89] "kube-vip-ha-198834" [be799fc4-6fc9-4e6d-9f48-b72ada1acf92] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0917 00:15:43.667016  791016 system_pods.go:89] "kube-vip-ha-198834-m02" [a9e3b24d-529d-409c-982f-c72bd0cc4693] Running
	I0917 00:15:43.667022  791016 system_pods.go:89] "kube-vip-ha-198834-m03" [608ad5d9-c8f7-4a62-a1f3-8cdac07ca388] Running
	I0917 00:15:43.667026  791016 system_pods.go:89] "storage-provisioner" [6b6f64f3-2647-4e13-be41-47fcc6111f3e] Running
	I0917 00:15:43.667035  791016 system_pods.go:126] duration metric: took 15.140554ms to wait for k8s-apps to be running ...
	I0917 00:15:43.667044  791016 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:15:43.667101  791016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:15:43.683924  791016 system_svc.go:56] duration metric: took 16.85635ms WaitForService to wait for kubelet
	I0917 00:15:43.683963  791016 kubeadm.go:578] duration metric: took 6.813012204s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:15:43.683983  791016 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:15:43.687655  791016 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:15:43.687692  791016 node_conditions.go:123] node cpu capacity is 8
	I0917 00:15:43.687704  791016 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:15:43.687707  791016 node_conditions.go:123] node cpu capacity is 8
	I0917 00:15:43.687711  791016 node_conditions.go:105] duration metric: took 3.724125ms to run NodePressure ...
	I0917 00:15:43.687726  791016 start.go:241] waiting for startup goroutines ...
	I0917 00:15:43.687757  791016 start.go:255] writing updated cluster config ...
	I0917 00:15:43.692409  791016 out.go:203] 
	I0917 00:15:43.693954  791016 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:15:43.694046  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:43.695802  791016 out.go:179] * Starting "ha-198834-m04" worker node in "ha-198834" cluster
	I0917 00:15:43.697154  791016 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:15:43.698391  791016 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:15:43.699490  791016 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:15:43.699515  791016 cache.go:58] Caching tarball of preloaded images
	I0917 00:15:43.699526  791016 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:15:43.699611  791016 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:15:43.699626  791016 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:15:43.699718  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:43.722121  791016 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:15:43.722140  791016 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:15:43.722155  791016 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:15:43.722183  791016 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:15:43.722250  791016 start.go:364] duration metric: took 48.37µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:15:43.722270  791016 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:15:43.722287  791016 fix.go:54] fixHost starting: m04
	I0917 00:15:43.722532  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:15:43.742782  791016 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Stopped err=<nil>
	W0917 00:15:43.742808  791016 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:15:43.744738  791016 out.go:252] * Restarting existing docker container for "ha-198834-m04" ...
	I0917 00:15:43.744819  791016 cli_runner.go:164] Run: docker start ha-198834-m04
	I0917 00:15:44.000115  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:15:44.021055  791016 kic.go:430] container "ha-198834-m04" state is running.
	I0917 00:15:44.021486  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:15:44.042936  791016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/ha-198834/config.json ...
	I0917 00:15:44.043250  791016 machine.go:93] provisionDockerMachine start ...
	I0917 00:15:44.043333  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:15:44.062132  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:15:44.062387  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0917 00:15:44.062402  791016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:15:44.063000  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56636->127.0.0.1:32848: read: connection reset by peer
	I0917 00:15:47.098738  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:15:50.134044  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:15:53.170618  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:15:56.207829  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:15:59.244776  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:02.280526  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:05.317837  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:08.354212  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:11.390558  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:14.427425  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:17.463898  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:20.500642  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:23.536486  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:26.573742  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:29.610330  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:32.646092  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:35.682701  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:38.718396  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:41.756896  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:44.793218  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:47.828895  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:50.865320  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:53.903006  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:56.940127  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:16:59.977576  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:03.014141  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:06.050453  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:09.088080  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:12.123520  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:15.160855  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:18.197516  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:21.233522  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:24.273142  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:27.309508  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:30.347190  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:33.383802  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:36.420366  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:39.458309  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:42.494883  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:45.532591  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:48.569257  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:51.605240  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:54.643463  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:17:57.679270  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:00.715117  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:03.753429  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:06.791234  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:09.828293  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:12.864420  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:15.900178  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:18.937530  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:21.973106  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:25.010703  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:28.047334  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:31.083791  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:34.120370  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:37.157229  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:40.192699  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:43.229282  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:46.230993  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:18:46.231045  791016 ubuntu.go:182] provisioning hostname "ha-198834-m04"
	I0917 00:18:46.231137  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:18:46.249452  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:18:46.249758  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0917 00:18:46.249779  791016 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198834-m04 && echo "ha-198834-m04" | sudo tee /etc/hostname
	I0917 00:18:46.285458  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:49.322790  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:52.358771  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:55.395146  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:18:58.430990  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:01.466240  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:04.502702  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:07.538712  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:10.575350  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:13.611556  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:16.650467  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:19.687931  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:22.724753  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:25.762172  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:28.799003  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:31.836666  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:34.874298  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:37.910869  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:40.948145  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:43.984159  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:47.022839  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:50.060638  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:53.097103  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:56.134937  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:19:59.171175  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:02.206435  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:05.245050  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:08.281228  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:11.317768  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:14.354163  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:17.390637  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:20.427145  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:23.463915  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:26.501454  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:29.538076  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:32.574385  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:35.609462  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:38.646664  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:41.683135  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:44.720530  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:47.756951  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:50.793122  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:53.830325  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:56.867726  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:20:59.905155  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:02.940607  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:05.978211  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:09.016132  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:12.054483  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:15.091784  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:18.126886  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:21.163483  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:24.200471  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:27.237187  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:30.274503  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:33.310590  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:36.350543  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:39.386298  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:42.422623  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:45.460302  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:48.461450  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:21:48.461548  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:21:48.480778  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:21:48.481107  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0917 00:21:48.481140  791016 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198834-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198834-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198834-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:21:48.516298  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:51.552071  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:54.590469  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:21:57.628670  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:00.664680  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:03.701437  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:06.739386  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:09.777268  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:12.813739  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:15.850785  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:18.887530  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:21.924310  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:24.959723  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:27.995646  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:31.031835  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:34.069898  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:37.106615  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:40.143875  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:43.182368  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:46.219887  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:49.255824  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:52.291827  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:55.329526  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:22:58.365554  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:01.402035  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:04.439392  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:07.475470  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:10.512193  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:13.547889  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:16.587547  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:19.625956  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:22.661476  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:25.699040  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:28.736086  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:31.772368  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:34.810879  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:37.846288  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:40.881735  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:43.918521  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:46.956842  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:49.994277  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:53.030898  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:56.068948  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:23:59.105416  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:02.141467  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:05.178642  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:08.214731  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:11.250406  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:14.288478  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:17.324824  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:20.360203  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:23.397681  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:26.434358  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:29.471836  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:32.509568  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:35.551457  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:38.589282  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:41.626303  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:44.664281  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:47.700228  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:50.700391  791016 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:24:50.700433  791016 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:24:50.700458  791016 ubuntu.go:190] setting up certificates
	I0917 00:24:50.700479  791016 provision.go:84] configureAuth start
	I0917 00:24:50.700558  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:24:50.720896  791016 provision.go:143] copyHostCerts
	I0917 00:24:50.720966  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:50.721019  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:24:50.721032  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:50.721163  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:24:50.721268  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:50.721289  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:24:50.721297  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:50.721328  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:24:50.721378  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:50.721395  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:24:50.721401  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:50.721423  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:24:50.721478  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:24:51.285190  791016 provision.go:177] copyRemoteCerts
	I0917 00:24:51.285253  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:24:51.285291  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:51.303249  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:51.338957  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:51.339061  791016 retry.go:31] will retry after 281.845457ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:51.657087  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:51.657127  791016 retry.go:31] will retry after 421.456805ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:52.114872  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:52.114918  791016 retry.go:31] will retry after 612.457307ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:52.764688  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:52.764777  791016 retry.go:31] will retry after 197.965238ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:52.963134  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:52.980867  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:53.017119  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:53.017161  791016 retry.go:31] will retry after 209.678413ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:53.263937  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:53.263979  791016 retry.go:31] will retry after 526.783878ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:53.827994  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:53.828035  791016 retry.go:31] will retry after 448.495953ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:54.313198  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.313318  791016 provision.go:87] duration metric: took 3.61282574s to configureAuth
	W0917 00:24:54.313335  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.313352  791016 retry.go:31] will retry after 103.827µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.314491  791016 provision.go:84] configureAuth start
	I0917 00:24:54.314577  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:24:54.333347  791016 provision.go:143] copyHostCerts
	I0917 00:24:54.333388  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:54.333417  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:24:54.333426  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:54.333484  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:24:54.333574  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:54.333593  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:24:54.333600  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:54.333622  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:24:54.333681  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:54.333697  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:24:54.333704  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:54.333722  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:24:54.333786  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:24:54.370138  791016 provision.go:177] copyRemoteCerts
	I0917 00:24:54.370200  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:24:54.370238  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:54.388376  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:54.425543  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.425585  791016 retry.go:31] will retry after 355.710441ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:54.818272  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:54.818309  791016 retry.go:31] will retry after 403.920682ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:55.258709  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:55.258740  791016 retry.go:31] will retry after 317.009231ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:55.612188  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:55.612218  791016 retry.go:31] will retry after 534.541777ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:56.182325  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.182427  791016 provision.go:87] duration metric: took 1.867913877s to configureAuth
	W0917 00:24:56.182442  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.182458  791016 retry.go:31] will retry after 201.235µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.183625  791016 provision.go:84] configureAuth start
	I0917 00:24:56.183706  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:24:56.200713  791016 provision.go:143] copyHostCerts
	I0917 00:24:56.200751  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:56.200784  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:24:56.200793  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:56.200853  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:24:56.200980  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:56.201007  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:24:56.201015  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:56.201041  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:24:56.201111  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:56.201127  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:24:56.201154  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:56.201176  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:24:56.201242  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:24:56.655526  791016 provision.go:177] copyRemoteCerts
	I0917 00:24:56.655593  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:24:56.655639  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:56.674005  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:56.709386  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.709416  791016 retry.go:31] will retry after 171.720164ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:56.917745  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:56.917783  791016 retry.go:31] will retry after 188.501002ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:57.143392  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:57.143424  791016 retry.go:31] will retry after 332.534047ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:57.512832  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:57.512864  791016 retry.go:31] will retry after 695.94873ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:58.244519  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:58.244600  791016 retry.go:31] will retry after 170.395165ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:58.416061  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:24:58.433852  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:24:58.469453  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:58.469485  791016 retry.go:31] will retry after 314.919798ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:58.820406  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:58.820439  791016 retry.go:31] will retry after 336.705475ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:59.193385  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:59.193414  791016 retry.go:31] will retry after 746.821803ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:24:59.978043  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:59.978147  791016 provision.go:87] duration metric: took 3.794501473s to configureAuth
	W0917 00:24:59.978178  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:59.978194  791016 retry.go:31] will retry after 180.554µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:24:59.979340  791016 provision.go:84] configureAuth start
	I0917 00:24:59.979415  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:24:59.996428  791016 provision.go:143] copyHostCerts
	I0917 00:24:59.996473  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:59.996506  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:24:59.996516  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:24:59.996573  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:24:59.996662  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:59.996691  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:24:59.996697  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:24:59.996722  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:24:59.996780  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:59.996797  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:24:59.996800  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:24:59.996818  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:24:59.996881  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:00.210182  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:00.210246  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:00.210288  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:00.228967  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:00.264547  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:00.264579  791016 retry.go:31] will retry after 249.421954ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:00.550784  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:00.550815  791016 retry.go:31] will retry after 446.501241ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:01.033319  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.033351  791016 retry.go:31] will retry after 322.057737ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:01.391567  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.391664  791016 retry.go:31] will retry after 258.753859ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.651131  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:01.668829  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:01.705294  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.705328  791016 retry.go:31] will retry after 147.654759ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:01.889504  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:01.889556  791016 retry.go:31] will retry after 321.004527ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:02.247226  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:02.247256  791016 retry.go:31] will retry after 286.119197ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:02.568952  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:02.568987  791016 retry.go:31] will retry after 792.931835ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:03.398850  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:03.398968  791016 provision.go:87] duration metric: took 3.419601273s to configureAuth
	W0917 00:25:03.398980  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:03.398992  791016 retry.go:31] will retry after 275.728µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:03.400129  791016 provision.go:84] configureAuth start
	I0917 00:25:03.400208  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:03.417859  791016 provision.go:143] copyHostCerts
	I0917 00:25:03.417895  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:03.417951  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:03.417970  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:03.418027  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:03.418116  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:03.418139  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:03.418145  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:03.418169  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:03.418230  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:03.418248  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:03.418252  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:03.418270  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:03.418334  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:03.710212  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:03.710280  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:03.710316  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:03.729281  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:03.765486  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:03.765523  791016 retry.go:31] will retry after 234.355448ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:04.037830  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:04.037865  791016 retry.go:31] will retry after 202.71283ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:04.277687  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:04.277721  791016 retry.go:31] will retry after 699.043005ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:05.012602  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:05.012635  791016 retry.go:31] will retry after 683.45052ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:05.732161  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:05.732267  791016 provision.go:87] duration metric: took 2.332116129s to configureAuth
	W0917 00:25:05.732281  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:05.732313  791016 retry.go:31] will retry after 408.117µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:05.733412  791016 provision.go:84] configureAuth start
	I0917 00:25:05.733507  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:05.751337  791016 provision.go:143] copyHostCerts
	I0917 00:25:05.751373  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:05.751404  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:05.751425  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:05.751483  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:05.751611  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:05.751634  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:05.751639  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:05.751673  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:05.751745  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:05.751763  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:05.751767  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:05.751788  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:05.751854  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:06.013451  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:06.013524  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:06.013572  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:06.031320  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:06.068487  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:06.068521  791016 retry.go:31] will retry after 199.995997ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:06.304427  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:06.304462  791016 retry.go:31] will retry after 428.334269ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:06.768652  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:06.768689  791016 retry.go:31] will retry after 282.250622ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:07.088533  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.088624  791016 retry.go:31] will retry after 130.195743ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.219926  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:07.237696  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:07.273350  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.273381  791016 retry.go:31] will retry after 332.263248ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:07.641362  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.641394  791016 retry.go:31] will retry after 219.825801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:07.897344  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:07.897384  791016 retry.go:31] will retry after 289.760844ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:08.223698  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:08.223734  791016 retry.go:31] will retry after 931.250784ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:09.191398  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.191617  791016 provision.go:87] duration metric: took 3.458158315s to configureAuth
	W0917 00:25:09.191645  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.191665  791016 retry.go:31] will retry after 486.462µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.192804  791016 provision.go:84] configureAuth start
	I0917 00:25:09.192898  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:09.210285  791016 provision.go:143] copyHostCerts
	I0917 00:25:09.210330  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:09.210368  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:09.210381  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:09.210454  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:09.210575  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:09.210607  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:09.210615  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:09.210655  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:09.210738  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:09.210761  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:09.210767  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:09.210798  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:09.210888  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:09.663367  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:09.663424  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:09.663472  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:09.683494  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:09.719268  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.719298  791016 retry.go:31] will retry after 199.262805ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:09.953459  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:09.953491  791016 retry.go:31] will retry after 204.479137ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:10.194710  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:10.194766  791016 retry.go:31] will retry after 758.559532ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:10.989359  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:10.989439  791016 retry.go:31] will retry after 370.221733ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:11.360025  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:11.377052  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:11.412480  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:11.412512  791016 retry.go:31] will retry after 329.383966ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:11.777745  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:11.777791  791016 retry.go:31] will retry after 269.690913ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:12.083866  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:12.083920  791016 retry.go:31] will retry after 572.239384ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:12.694586  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:12.694623  791016 retry.go:31] will retry after 464.05197ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:13.195486  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:13.195578  791016 provision.go:87] duration metric: took 4.002742375s to configureAuth
	W0917 00:25:13.195591  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:13.195605  791016 retry.go:31] will retry after 780.686µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:13.196735  791016 provision.go:84] configureAuth start
	I0917 00:25:13.196827  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:13.214509  791016 provision.go:143] copyHostCerts
	I0917 00:25:13.214547  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:13.214583  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:13.214596  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:13.214653  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:13.214774  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:13.214804  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:13.214814  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:13.214855  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:13.214977  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:13.215005  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:13.215015  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:13.215051  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:13.215146  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:13.649513  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:13.649599  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:13.649651  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:13.667345  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:13.703380  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:13.703405  791016 retry.go:31] will retry after 313.64163ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:14.053145  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:14.053179  791016 retry.go:31] will retry after 317.387612ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:14.406606  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:14.406635  791016 retry.go:31] will retry after 566.64859ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:15.009997  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.010090  791016 retry.go:31] will retry after 196.134619ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.206496  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:15.225650  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:15.261454  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.261483  791016 retry.go:31] will retry after 245.022682ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:15.541833  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.541868  791016 retry.go:31] will retry after 322.443288ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:15.900997  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:15.901029  791016 retry.go:31] will retry after 516.015576ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:16.453598  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:16.453713  791016 provision.go:87] duration metric: took 3.256958214s to configureAuth
	W0917 00:25:16.453726  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:16.453747  791016 retry.go:31] will retry after 2.333678ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:16.456970  791016 provision.go:84] configureAuth start
	I0917 00:25:16.457049  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:16.474811  791016 provision.go:143] copyHostCerts
	I0917 00:25:16.474850  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:16.474886  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:16.474898  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:16.474982  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:16.475066  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:16.475089  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:16.475094  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:16.475116  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:16.475200  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:16.475229  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:16.475235  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:16.475255  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:16.475307  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:16.799509  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:16.799573  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:16.799610  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:16.817674  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:16.853071  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:16.853103  791016 retry.go:31] will retry after 168.122328ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:17.056441  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:17.056479  791016 retry.go:31] will retry after 382.833105ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:17.475972  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:17.476010  791016 retry.go:31] will retry after 655.886733ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:18.168049  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:18.168130  791016 retry.go:31] will retry after 198.307554ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:18.367629  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:18.385594  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:18.421176  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:18.421209  791016 retry.go:31] will retry after 338.713182ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:18.796178  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:18.796221  791016 retry.go:31] will retry after 259.124236ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:19.090799  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.090830  791016 retry.go:31] will retry after 349.555843ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:19.476895  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.477033  791016 provision.go:87] duration metric: took 3.020038692s to configureAuth
	W0917 00:25:19.477045  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.477057  791016 retry.go:31] will retry after 2.091895ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.479197  791016 provision.go:84] configureAuth start
	I0917 00:25:19.479286  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:19.496629  791016 provision.go:143] copyHostCerts
	I0917 00:25:19.496665  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:19.496694  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:19.496703  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:19.496759  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:19.496860  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:19.496891  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:19.496902  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:19.496961  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:19.497023  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:19.497040  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:19.497047  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:19.497067  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:19.497118  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:19.683726  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:19.683783  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:19.683827  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:19.700779  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:19.736622  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:19.736652  791016 retry.go:31] will retry after 290.006963ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:20.062197  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:20.062228  791016 retry.go:31] will retry after 316.758379ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:20.414876  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:20.414939  791016 retry.go:31] will retry after 431.588331ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:20.882854  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:20.882887  791016 retry.go:31] will retry after 517.588716ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:21.436944  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.437046  791016 provision.go:87] duration metric: took 1.957814295s to configureAuth
	W0917 00:25:21.437060  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.437075  791016 retry.go:31] will retry after 4.850853ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.442274  791016 provision.go:84] configureAuth start
	I0917 00:25:21.442352  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:21.459812  791016 provision.go:143] copyHostCerts
	I0917 00:25:21.459852  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:21.459889  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:21.459915  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:21.459981  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:21.460083  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:21.460111  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:21.460118  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:21.460154  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:21.460230  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:21.460255  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:21.460265  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:21.460298  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:21.460379  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:21.629816  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:21.629893  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:21.629972  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:21.647193  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:21.682201  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.682241  791016 retry.go:31] will retry after 126.915617ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:21.845401  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:21.845437  791016 retry.go:31] will retry after 469.570747ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:22.351442  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:22.351471  791016 retry.go:31] will retry after 507.616138ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:22.895718  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:22.895753  791016 retry.go:31] will retry after 740.220603ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:23.672589  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:23.672691  791016 provision.go:87] duration metric: took 2.230395673s to configureAuth
	W0917 00:25:23.672706  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:23.672720  791016 retry.go:31] will retry after 6.27654ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:23.679963  791016 provision.go:84] configureAuth start
	I0917 00:25:23.680048  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:23.698019  791016 provision.go:143] copyHostCerts
	I0917 00:25:23.698055  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:23.698084  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:23.698094  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:23.698150  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:23.698231  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:23.698249  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:23.698256  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:23.698277  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:23.698337  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:23.698355  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:23.698361  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:23.698380  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:23.698429  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:24.029562  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:24.029627  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:24.029680  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:24.047756  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:24.083251  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:24.083280  791016 retry.go:31] will retry after 306.883934ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:24.426968  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:24.427002  791016 retry.go:31] will retry after 551.664172ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:25.015455  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:25.015500  791016 retry.go:31] will retry after 393.354081ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:25.443750  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:25.443843  791016 retry.go:31] will retry after 209.025309ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:25.653338  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:25.671012  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:25.706190  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:25.706224  791016 retry.go:31] will retry after 337.41418ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:26.080787  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:26.080820  791016 retry.go:31] will retry after 315.469689ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:26.432569  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:26.432605  791016 retry.go:31] will retry after 312.231441ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:26.780798  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:26.780835  791016 retry.go:31] will retry after 483.843039ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:27.300600  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.300689  791016 provision.go:87] duration metric: took 3.620701809s to configureAuth
	W0917 00:25:27.300702  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.300718  791016 retry.go:31] will retry after 11.695348ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.312945  791016 provision.go:84] configureAuth start
	I0917 00:25:27.313032  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:27.330556  791016 provision.go:143] copyHostCerts
	I0917 00:25:27.330602  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:27.330635  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:27.330647  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:27.330822  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:27.331014  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:27.331047  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:27.331058  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:27.331099  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:27.331173  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:27.331200  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:27.331210  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:27.331244  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:27.331317  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:27.629838  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:27.629916  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:27.629953  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:27.647177  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:27.683485  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.683515  791016 retry.go:31] will retry after 138.123235ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:27.856935  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:27.856982  791016 retry.go:31] will retry after 436.619432ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:28.329524  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:28.329555  791016 retry.go:31] will retry after 467.020117ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:28.833937  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:28.834026  791016 retry.go:31] will retry after 183.2183ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:29.017423  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:29.034786  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:29.069940  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:29.069976  791016 retry.go:31] will retry after 166.546001ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:29.272729  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:29.272758  791016 retry.go:31] will retry after 198.842029ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:29.507282  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:29.507312  791016 retry.go:31] will retry after 555.640977ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:30.100015  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:30.100125  791016 provision.go:87] duration metric: took 2.787150007s to configureAuth
	W0917 00:25:30.100139  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:30.100157  791016 retry.go:31] will retry after 11.223573ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:30.112404  791016 provision.go:84] configureAuth start
	I0917 00:25:30.112484  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:30.131065  791016 provision.go:143] copyHostCerts
	I0917 00:25:30.131109  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:30.131147  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:30.131156  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:30.131248  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:30.131358  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:30.131386  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:30.131393  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:30.131431  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:30.131524  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:30.131544  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:30.131551  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:30.131573  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:30.131628  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:30.575141  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:30.575202  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:30.575252  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:30.592938  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:30.628649  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:30.628681  791016 retry.go:31] will retry after 366.602106ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:31.031508  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:31.031550  791016 retry.go:31] will retry after 275.917946ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:31.347177  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:31.347212  791016 retry.go:31] will retry after 745.1072ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:32.128387  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:32.128472  791016 retry.go:31] will retry after 200.656021ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:32.329946  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:32.347751  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:32.383095  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:32.383126  791016 retry.go:31] will retry after 270.30765ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:32.689393  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:32.689427  791016 retry.go:31] will retry after 386.377583ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:33.111945  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:33.111985  791016 retry.go:31] will retry after 779.601898ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:33.927500  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:33.927602  791016 provision.go:87] duration metric: took 3.815171721s to configureAuth
	W0917 00:25:33.927617  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:33.927631  791016 retry.go:31] will retry after 25.310066ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:33.953841  791016 provision.go:84] configureAuth start
	I0917 00:25:33.953971  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:33.971697  791016 provision.go:143] copyHostCerts
	I0917 00:25:33.971740  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:33.971778  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:33.971790  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:33.971858  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:33.971998  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:33.972029  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:33.972037  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:33.972076  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:33.972149  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:33.972177  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:33.972185  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:33.972232  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:33.972310  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:34.221689  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:34.221771  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:34.221812  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:34.239922  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:34.276302  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:34.276335  791016 retry.go:31] will retry after 202.741431ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:34.515638  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:34.515671  791016 retry.go:31] will retry after 330.700518ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:34.886306  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:34.886340  791016 retry.go:31] will retry after 464.499956ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:35.387217  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:35.387246  791016 retry.go:31] will retry after 834.737314ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:36.257758  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:36.257850  791016 provision.go:87] duration metric: took 2.303985725s to configureAuth
	W0917 00:25:36.257863  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:36.257878  791016 retry.go:31] will retry after 14.936659ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:36.273125  791016 provision.go:84] configureAuth start
	I0917 00:25:36.273246  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:36.290248  791016 provision.go:143] copyHostCerts
	I0917 00:25:36.290292  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:36.290321  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:36.290332  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:36.290396  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:36.290473  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:36.290492  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:36.290496  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:36.290517  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:36.290572  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:36.290590  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:36.290596  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:36.290616  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:36.290666  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:36.498343  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:36.498408  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:36.498443  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:36.515898  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:36.552435  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:36.552465  791016 retry.go:31] will retry after 180.61757ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:36.769325  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:36.769358  791016 retry.go:31] will retry after 562.132822ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:37.368228  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:37.368264  791016 retry.go:31] will retry after 544.785898ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:37.949256  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:37.949359  791016 retry.go:31] will retry after 128.292209ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:38.078770  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:38.097675  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:38.133371  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:38.133402  791016 retry.go:31] will retry after 352.391784ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:38.521888  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:38.521955  791016 retry.go:31] will retry after 460.42605ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:39.018110  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:39.018148  791016 retry.go:31] will retry after 387.428687ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:39.441428  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:39.441509  791016 provision.go:87] duration metric: took 3.168355202s to configureAuth
	W0917 00:25:39.441518  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:39.441529  791016 retry.go:31] will retry after 29.479848ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:39.471745  791016 provision.go:84] configureAuth start
	I0917 00:25:39.471861  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:39.489985  791016 provision.go:143] copyHostCerts
	I0917 00:25:39.490027  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:39.490063  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:39.490073  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:39.490138  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:39.490218  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:39.490235  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:39.490242  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:39.490263  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:39.490310  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:39.490326  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:39.490332  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:39.490353  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:39.490429  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:39.837444  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:39.837517  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:39.837561  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:39.855699  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:39.892374  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:39.892402  791016 retry.go:31] will retry after 339.257174ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:40.267805  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:40.267841  791016 retry.go:31] will retry after 430.368382ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:40.733710  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:40.733747  791016 retry.go:31] will retry after 574.039985ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:41.344413  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:41.344514  791016 retry.go:31] will retry after 220.875911ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:41.566059  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:41.583584  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:41.620538  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:41.620570  791016 retry.go:31] will retry after 285.005928ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:41.941411  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:41.941447  791016 retry.go:31] will retry after 277.918377ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:42.255712  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:42.255751  791016 retry.go:31] will retry after 471.129173ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:42.762606  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:42.762646  791016 retry.go:31] will retry after 740.22815ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:43.538086  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:43.538195  791016 provision.go:87] duration metric: took 4.06640215s to configureAuth
	W0917 00:25:43.538209  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:43.538230  791016 retry.go:31] will retry after 37.466882ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:43.576470  791016 provision.go:84] configureAuth start
	I0917 00:25:43.576585  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:43.594402  791016 provision.go:143] copyHostCerts
	I0917 00:25:43.594468  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:43.594516  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:43.594529  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:43.594592  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:43.594691  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:43.594717  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:43.594725  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:43.594761  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:43.594830  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:43.594855  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:43.594864  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:43.594894  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:43.595010  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:43.877318  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:43.877382  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:43.877416  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:43.896784  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:43.932298  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:43.932330  791016 retry.go:31] will retry after 235.376507ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:44.204731  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:44.204762  791016 retry.go:31] will retry after 243.192801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:44.484438  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:44.484471  791016 retry.go:31] will retry after 467.521838ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:44.987761  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:44.987792  791016 retry.go:31] will retry after 440.455179ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:45.464476  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:45.464564  791016 retry.go:31] will retry after 235.586322ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:45.700291  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:45.717698  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:45.754106  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:45.754133  791016 retry.go:31] will retry after 195.54121ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:45.985831  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:45.985859  791016 retry.go:31] will retry after 418.816392ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:46.441057  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:46.441088  791016 retry.go:31] will retry after 374.559798ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:46.852817  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:46.852921  791016 provision.go:87] duration metric: took 3.276390875s to configureAuth
	W0917 00:25:46.852942  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:46.852959  791016 retry.go:31] will retry after 83.017266ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:46.936222  791016 provision.go:84] configureAuth start
	I0917 00:25:46.936327  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:46.953837  791016 provision.go:143] copyHostCerts
	I0917 00:25:46.953876  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:46.953916  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:46.953926  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:46.953994  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:46.954075  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:46.954100  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:46.954107  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:46.954129  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:46.954173  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:46.954192  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:46.954197  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:46.954217  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:46.954267  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:47.247232  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:47.247295  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:47.247330  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:47.264843  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:47.300678  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:47.300707  791016 retry.go:31] will retry after 180.912565ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:47.518282  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:47.518316  791016 retry.go:31] will retry after 370.390241ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:47.924210  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:47.924247  791016 retry.go:31] will retry after 540.421858ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:48.500616  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:48.500710  791016 retry.go:31] will retry after 231.87747ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:48.733102  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:48.751254  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:48.787314  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:48.787350  791016 retry.go:31] will retry after 259.477269ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:49.083609  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:49.083645  791016 retry.go:31] will retry after 362.863033ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:49.482344  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:49.482375  791016 retry.go:31] will retry after 826.172003ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:50.344419  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:50.344523  791016 provision.go:87] duration metric: took 3.408273466s to configureAuth
	W0917 00:25:50.344542  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:50.344557  791016 retry.go:31] will retry after 143.029284ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:50.487719  791016 provision.go:84] configureAuth start
	I0917 00:25:50.487824  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:50.506244  791016 provision.go:143] copyHostCerts
	I0917 00:25:50.506288  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:50.506326  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:50.506339  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:50.506410  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:50.506525  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:50.506545  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:50.506549  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:50.506571  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:50.506615  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:50.506632  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:50.506638  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:50.506657  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:50.506706  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:51.057390  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:51.057452  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:51.057498  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:51.075369  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:51.110807  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:51.110841  791016 retry.go:31] will retry after 261.420902ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:51.408489  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:51.408521  791016 retry.go:31] will retry after 457.951037ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:51.902924  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:51.902960  791016 retry.go:31] will retry after 555.51636ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:52.494245  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:52.494277  791016 retry.go:31] will retry after 502.444655ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:53.032775  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.032876  791016 provision.go:87] duration metric: took 2.545127126s to configureAuth
	W0917 00:25:53.032887  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.032922  791016 retry.go:31] will retry after 132.723075ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.166223  791016 provision.go:84] configureAuth start
	I0917 00:25:53.166336  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:53.184729  791016 provision.go:143] copyHostCerts
	I0917 00:25:53.184765  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:53.184796  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:53.184806  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:53.184861  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:53.185004  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:53.185026  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:53.185031  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:53.185056  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:53.185106  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:53.185131  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:53.185137  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:53.185169  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:53.185241  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:53.405462  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:53.405529  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:53.405566  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:53.423724  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:53.460846  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.460884  791016 retry.go:31] will retry after 358.963449ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:53.855720  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:53.855751  791016 retry.go:31] will retry after 447.864842ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:54.340046  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:54.340083  791016 retry.go:31] will retry after 421.446936ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:54.797579  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:54.797675  791016 retry.go:31] will retry after 353.967853ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:55.152271  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:55.169297  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:55.204258  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:55.204297  791016 retry.go:31] will retry after 278.657731ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:55.519656  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:55.519692  791016 retry.go:31] will retry after 208.336638ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:55.764495  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:55.764531  791016 retry.go:31] will retry after 283.294437ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:56.084010  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:56.084045  791016 retry.go:31] will retry after 1.063783665s: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:57.184001  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:57.184096  791016 provision.go:87] duration metric: took 4.017831036s to configureAuth
	W0917 00:25:57.184108  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:57.184119  791016 retry.go:31] will retry after 378.928957ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:57.563691  791016 provision.go:84] configureAuth start
	I0917 00:25:57.563778  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:25:57.581822  791016 provision.go:143] copyHostCerts
	I0917 00:25:57.581868  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:57.581899  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:25:57.581923  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:25:57.581992  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:25:57.582073  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:57.582095  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:25:57.582102  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:25:57.582127  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:25:57.582173  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:57.582197  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:25:57.582203  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:25:57.582221  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:25:57.582309  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:25:58.132305  791016 provision.go:177] copyRemoteCerts
	I0917 00:25:58.132367  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:25:58.132400  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:58.149902  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:58.185053  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:58.185090  791016 retry.go:31] will retry after 212.996056ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:58.433521  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:58.433549  791016 retry.go:31] will retry after 216.913128ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:58.686168  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:58.686197  791016 retry.go:31] will retry after 655.131011ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:25:59.377369  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:59.377450  791016 retry.go:31] will retry after 205.148257ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:59.582864  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:25:59.604554  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:25:59.640460  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:25:59.640495  791016 retry.go:31] will retry after 329.057785ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:00.007985  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:00.008024  791016 retry.go:31] will retry after 536.951443ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:00.581652  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:00.581690  791016 retry.go:31] will retry after 536.690401ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:01.155338  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:01.155433  791016 provision.go:87] duration metric: took 3.591713623s to configureAuth
	W0917 00:26:01.155445  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:01.155462  791016 retry.go:31] will retry after 622.316963ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:01.777972  791016 provision.go:84] configureAuth start
	I0917 00:26:01.778089  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:01.796128  791016 provision.go:143] copyHostCerts
	I0917 00:26:01.796164  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:01.796194  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:01.796201  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:01.796259  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:01.796336  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:01.796354  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:01.796361  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:01.796381  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:01.796425  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:01.796441  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:01.796447  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:01.796466  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:01.796523  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:02.270708  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:02.270783  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:02.270825  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:02.289557  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:02.325352  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:02.325380  791016 retry.go:31] will retry after 165.164388ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:02.526457  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:02.526494  791016 retry.go:31] will retry after 421.940684ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:02.985238  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:02.985271  791016 retry.go:31] will retry after 756.233115ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:03.777794  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:03.777898  791016 retry.go:31] will retry after 362.951024ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:04.141610  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:04.159786  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:04.196169  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:04.196203  791016 retry.go:31] will retry after 352.114514ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:04.584706  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:04.584741  791016 retry.go:31] will retry after 236.165759ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:04.856300  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:04.856330  791016 retry.go:31] will retry after 329.150146ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:05.220887  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:05.220941  791016 retry.go:31] will retry after 832.300856ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:06.089574  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:06.089684  791016 provision.go:87] duration metric: took 4.311683722s to configureAuth
	W0917 00:26:06.089698  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:06.089711  791016 retry.go:31] will retry after 747.062346ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:06.837550  791016 provision.go:84] configureAuth start
	I0917 00:26:06.837663  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:06.854985  791016 provision.go:143] copyHostCerts
	I0917 00:26:06.855025  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:06.855062  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:06.855077  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:06.855162  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:06.855261  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:06.855289  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:06.855313  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:06.855351  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:06.855415  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:06.855439  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:06.855447  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:06.855473  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:06.855543  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:07.186545  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:07.186614  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:07.186651  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:07.204967  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:07.240895  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:07.240993  791016 retry.go:31] will retry after 168.762413ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:07.446091  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:07.446121  791016 retry.go:31] will retry after 434.540683ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:07.917493  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:07.917531  791016 retry.go:31] will retry after 701.606273ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:08.655641  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:08.655724  791016 retry.go:31] will retry after 320.530213ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:08.977392  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:08.995492  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:09.031193  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:09.031221  791016 retry.go:31] will retry after 191.167982ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:09.258892  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:09.258953  791016 retry.go:31] will retry after 454.439774ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:09.749896  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:09.749949  791016 retry.go:31] will retry after 825.076652ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:10.611548  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:10.611625  791016 provision.go:87] duration metric: took 3.774028836s to configureAuth
	W0917 00:26:10.611634  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:10.611644  791016 retry.go:31] will retry after 1.309627243s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:11.922057  791016 provision.go:84] configureAuth start
	I0917 00:26:11.922182  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:11.939828  791016 provision.go:143] copyHostCerts
	I0917 00:26:11.939864  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:11.939891  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:11.939898  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:11.939986  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:11.940075  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:11.940094  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:11.940101  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:11.940123  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:11.940169  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:11.940191  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:11.940198  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:11.940217  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:11.940303  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:12.110010  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:12.110072  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:12.110108  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:12.128184  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:12.164303  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:12.164340  791016 retry.go:31] will retry after 339.722995ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:12.540417  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:12.540451  791016 retry.go:31] will retry after 335.702574ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:12.911688  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:12.911716  791016 retry.go:31] will retry after 605.279338ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:13.552353  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:13.552425  791016 retry.go:31] will retry after 229.36283ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:13.782969  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:13.803921  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:13.840242  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:13.840277  791016 retry.go:31] will retry after 206.955206ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:14.084438  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:14.084468  791016 retry.go:31] will retry after 289.625439ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:14.410419  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:14.410451  791016 retry.go:31] will retry after 792.244108ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:15.238421  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:15.238506  791016 provision.go:87] duration metric: took 3.316415805s to configureAuth
	W0917 00:26:15.238518  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:15.238536  791016 retry.go:31] will retry after 2.156331292s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:17.395523  791016 provision.go:84] configureAuth start
	I0917 00:26:17.395612  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:17.413608  791016 provision.go:143] copyHostCerts
	I0917 00:26:17.413651  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:17.413683  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:17.413693  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:17.413747  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:17.413841  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:17.413863  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:17.413869  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:17.413891  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:17.413973  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:17.413992  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:17.414000  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:17.414021  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:17.414073  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:17.562638  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:17.562714  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:17.562769  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:17.581673  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:17.619191  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:17.619225  791016 retry.go:31] will retry after 169.359395ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:17.824944  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:17.824986  791016 retry.go:31] will retry after 561.831267ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:18.424226  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:18.424260  791016 retry.go:31] will retry after 531.694204ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:18.992199  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:18.992233  791016 retry.go:31] will retry after 494.76273ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:19.523693  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:19.523772  791016 provision.go:87] duration metric: took 2.128222413s to configureAuth
	W0917 00:26:19.523787  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:19.523798  791016 retry.go:31] will retry after 3.318889156s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:22.843734  791016 provision.go:84] configureAuth start
	I0917 00:26:22.843830  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:22.861152  791016 provision.go:143] copyHostCerts
	I0917 00:26:22.861191  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:22.861227  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:22.861236  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:22.861288  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:22.861367  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:22.861386  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:22.861393  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:22.861415  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:22.861459  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:22.861475  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:22.861481  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:22.861499  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:22.861601  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:23.052424  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:23.052485  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:23.052521  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:23.069689  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:23.105081  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:23.105108  791016 retry.go:31] will retry after 349.300156ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:23.490547  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:23.490580  791016 retry.go:31] will retry after 224.689981ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:23.754667  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:23.754699  791016 retry.go:31] will retry after 397.257295ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:24.188087  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:24.188181  791016 retry.go:31] will retry after 233.82005ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:24.422610  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:24.441161  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:24.477396  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:24.477429  791016 retry.go:31] will retry after 217.93614ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:24.731162  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:24.731195  791016 retry.go:31] will retry after 543.106744ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:25.310425  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:25.310458  791016 retry.go:31] will retry after 677.952876ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:26.025241  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:26.025345  791016 provision.go:87] duration metric: took 3.181582431s to configureAuth
	W0917 00:26:26.025358  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:26.025378  791016 retry.go:31] will retry after 2.937511032s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:28.964067  791016 provision.go:84] configureAuth start
	I0917 00:26:28.964159  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:28.981410  791016 provision.go:143] copyHostCerts
	I0917 00:26:28.981446  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:28.981476  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:28.981485  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:28.981541  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:28.981616  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:28.981636  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:28.981643  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:28.981663  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:28.981706  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:28.981725  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:28.981731  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:28.981752  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:28.981803  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:29.817472  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:29.817531  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:29.817565  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:29.836010  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:29.871566  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:29.871593  791016 retry.go:31] will retry after 365.955083ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:30.273441  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:30.273480  791016 retry.go:31] will retry after 299.47315ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:30.609936  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:30.609981  791016 retry.go:31] will retry after 464.139848ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:31.110461  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:31.110559  791016 retry.go:31] will retry after 281.938805ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:31.393153  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:31.412126  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:31.448031  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:31.448060  791016 retry.go:31] will retry after 240.674801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:31.726392  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:31.726437  791016 retry.go:31] will retry after 519.604443ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:32.282247  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:32.282275  791016 retry.go:31] will retry after 382.48499ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:32.701204  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:32.701236  791016 retry.go:31] will retry after 692.255212ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:33.429731  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:33.429835  791016 provision.go:87] duration metric: took 4.465739293s to configureAuth
	W0917 00:26:33.429848  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:33.429863  791016 retry.go:31] will retry after 5.272755601s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:38.703985  791016 provision.go:84] configureAuth start
	I0917 00:26:38.704103  791016 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-198834-m04
	I0917 00:26:38.721357  791016 provision.go:143] copyHostCerts
	I0917 00:26:38.721395  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:38.721432  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:26:38.721441  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:26:38.721519  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:26:38.721609  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:38.721630  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:26:38.721637  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:26:38.721663  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:26:38.721708  791016 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:38.721725  791016 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:26:38.721731  791016 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:26:38.721749  791016 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:26:38.721830  791016 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.ha-198834-m04 san=[127.0.0.1 192.168.49.5 ha-198834-m04 localhost minikube]
	I0917 00:26:38.866248  791016 provision.go:177] copyRemoteCerts
	I0917 00:26:38.866317  791016 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:38.866370  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:38.884241  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:38.919665  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:38.919696  791016 retry.go:31] will retry after 235.506838ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:39.191745  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:39.191789  791016 retry.go:31] will retry after 390.014802ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:39.619248  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:39.619277  791016 retry.go:31] will retry after 571.493485ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:40.225994  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:40.226067  791016 retry.go:31] will retry after 216.613249ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:40.443463  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:40.462158  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:40.498610  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:40.498658  791016 retry.go:31] will retry after 374.596845ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:40.909441  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:40.909473  791016 retry.go:31] will retry after 298.991353ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:41.245148  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:41.245180  791016 retry.go:31] will retry after 514.820757ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:41.797231  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:41.797273  791016 retry.go:31] will retry after 582.996085ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:42.417629  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.417736  791016 provision.go:87] duration metric: took 3.713721614s to configureAuth
	W0917 00:26:42.417749  791016 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.417763  791016 ubuntu.go:202] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.417777  791016 machine.go:96] duration metric: took 10m58.374511119s to provisionDockerMachine
	I0917 00:26:42.417855  791016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:26:42.417888  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:42.435191  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:42.470768  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.470801  791016 retry.go:31] will retry after 345.968132ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:42.853264  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:42.853295  791016 retry.go:31] will retry after 554.061651ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:43.443002  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:43.443035  791016 retry.go:31] will retry after 543.13258ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:44.022801  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:44.022890  791016 retry.go:31] will retry after 370.797414ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:44.394558  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:44.412159  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:44.447565  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:44.447595  791016 retry.go:31] will retry after 247.565285ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:44.731705  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:44.731739  791016 retry.go:31] will retry after 493.651528ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:45.262011  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:45.262044  791016 retry.go:31] will retry after 795.250603ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.093432  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.093527  791016 start.go:268] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.093543  791016 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:46.093596  791016 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:26:46.093646  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:46.111002  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:46.146831  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:46.146864  791016 retry.go:31] will retry after 125.228986ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.308502  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:46.308548  791016 retry.go:31] will retry after 489.138767ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:46.834015  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:46.834048  791016 retry.go:31] will retry after 417.464824ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:47.288306  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:47.288416  791016 retry.go:31] will retry after 372.538514ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:47.661780  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:47.679654  791016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/ha-198834-m04/id_rsa Username:docker}
	W0917 00:26:47.714898  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:47.714965  791016 retry.go:31] will retry after 343.045789ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:48.093992  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:48.094028  791016 retry.go:31] will retry after 370.55891ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:48.500717  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:48.500754  791016 retry.go:31] will retry after 705.998326ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:49.243081  791016 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:49.243187  791016 start.go:283] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:49.243205  791016 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:49.243211  791016 fix.go:56] duration metric: took 11m5.520925064s for fixHost
	I0917 00:26:49.243218  791016 start.go:83] releasing machines lock for "ha-198834-m04", held for 11m5.520957344s
	W0917 00:26:49.243238  791016 start.go:714] error starting host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:26:49.243324  791016 out.go:285] ! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:26:49.243336  791016 start.go:729] Will try again in 5 seconds ...
	I0917 00:26:54.245406  791016 start.go:360] acquireMachinesLock for ha-198834-m04: {Name:mk2f111f4f1780d2988fd31ee5547db80611d2b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:26:54.245544  791016 start.go:364] duration metric: took 79.986µs to acquireMachinesLock for "ha-198834-m04"
	I0917 00:26:54.245570  791016 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:26:54.245586  791016 fix.go:54] fixHost starting: m04
	I0917 00:26:54.245870  791016 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:26:54.265001  791016 fix.go:112] recreateIfNeeded on ha-198834-m04: state=Running err=<nil>
	W0917 00:26:54.265028  791016 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:26:54.267198  791016 out.go:252] * Updating the running docker "ha-198834-m04" container ...
	I0917 00:26:54.267265  791016 machine.go:93] provisionDockerMachine start ...
	I0917 00:26:54.267347  791016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-198834-m04
	I0917 00:26:54.285375  791016 main.go:141] libmachine: Using SSH client type: native
	I0917 00:26:54.285585  791016 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0917 00:26:54.285596  791016 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:26:54.321170  791016 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	
	
	==> Docker <==
	Sep 17 00:14:59 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:14:59Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fd69455479fe49678a69e6c15e7428cf2e0933a67e62ce21b42adc2ddffbbc50\""
	Sep 17 00:14:59 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:14:59Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bfd4ac8a61c79e2858011e2eb2ea54afa70f0971e7fa5ea4f41775c0a5fcbba2\""
	Sep 17 00:14:59 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:14:59Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-pstjp_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4da814f488dc35aa80427876bce77b335fc3a2333320170df1e542d7dbf76b68\""
	Sep 17 00:14:59 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:14:59Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-pstjp_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2ab50e090d466ea69554546e2e97ae7ed7a7527c0e0d169e99a4862ba0516a41\""
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"3f97e150fa11bdf3f45ed1747639547d748b2a4aebbb6b2fd647b7ea95cf2657\". Proceed without further sandbox information."
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"7ffde546949d7dc2194840f8724f47f8feb0de61810c4950f4fa9641a29af5b7\". Proceed without further sandbox information."
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"d6bbb58cc14ca6b1d1f49ce78ce96b4d4d266dd75b65b5850279ae1c23b942d6\". Proceed without further sandbox information."
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"364803df34eb0e8c3659d04f50bad11d47920ff9744963e312971b443ed63976\". Proceed without further sandbox information."
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/94a4aaa50848c5c46902d2f5f31027d8fa69909020863736bbe4ed925a1aaf49/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/500a8b6d35e8316014170bacd6defbc26c5a5e70be4ea416f72ac4006aa4c54f/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/381fff0989ff3d5f8ea2a962c384f2466e8f9fb4f229954ff715e6ede48ee180/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8bbb93a7ec96af1254cb3dedcfe54459e7e6abaca5aecc6a45d588c52c92e1ef/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d6726df5c2b9b06de48cea683ee8bc82c675f98dfe4ac9e9493e034ce46b7afd/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-pstjp_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4da814f488dc35aa80427876bce77b335fc3a2333320170df1e542d7dbf76b68\""
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mjbz6_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ec5acf265466354c265f4a5a6c47300c16e052d876e5b879f13c8cb25513d1df\""
	Sep 17 00:15:00 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5wx4k_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fd69455479fe49678a69e6c15e7428cf2e0933a67e62ce21b42adc2ddffbbc50\""
	Sep 17 00:15:03 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:03Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 17 00:15:05 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2e519619d3edf15ce02442d1b8d92e3b0e6563ffda776b14d1c8c20814d7f233/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:15:05 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/069c670e595fd97d7c7efcd201067221b5ca97b483a668b9d47fb40c1cee647f/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:15:05 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3c634681b75023ab8a2cd0017f7e9c735d2105e242c5011c1a53c986e83202ba/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 17 00:15:05 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ce40d4eecf1a62f360d0e8f22f237730ada79017437a87da49f4482df49a01f3/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:15:05 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bdec2b9cd0c4ff794d0a5be3a91123e9825972453df2a492b37ff676a7fc6beb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:15:05 ha-198834 cri-dockerd[1142]: time="2025-09-17T00:15:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/155fce19ca1f4956a2c24fdbe64666b63b3691275f68f433f49a4cd6f11e2525/resolv.conf as [nameserver 192.168.49.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:15:05 ha-198834 dockerd[819]: time="2025-09-17T00:15:05.483797822Z" level=info msg="ignoring event" container=a92cc25702b2854696a3b116c0f213bbf93cfd5dfef9f95338e323fa2370e103 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 00:15:36 ha-198834 dockerd[819]: time="2025-09-17T00:15:36.157964635Z" level=info msg="ignoring event" container=078654c5590a8817aff737ce7eed4c34fe5ca78bc651d7cec28a29c4a519eb3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5b5823dca6846       765655ea60781       11 minutes ago      Running             kube-vip                  3                   500a8b6d35e83       kube-vip-ha-198834
	37da656b2f1e6       6e38f40d628db       11 minutes ago      Running             storage-provisioner       4                   3c634681b7502       storage-provisioner
	4aaffb44988e8       df0860106674d       11 minutes ago      Running             kube-proxy                2                   069c670e595fd       kube-proxy-5tkhn
	2385b48b87ce9       409467f978b4a       11 minutes ago      Running             kindnet-cni               2                   155fce19ca1f4       kindnet-h28vp
	bc84630c8d02c       52546a367cc9e       11 minutes ago      Running             coredns                   4                   ce40d4eecf1a6       coredns-66bc5c9577-mjbz6
	ccc0ade4ba579       8c811b4aec35f       11 minutes ago      Running             busybox                   2                   bdec2b9cd0c4f       busybox-7b57f96db7-pstjp
	3ea6923436827       52546a367cc9e       11 minutes ago      Running             coredns                   4                   2e519619d3edf       coredns-66bc5c9577-5wx4k
	a92cc25702b28       6e38f40d628db       11 minutes ago      Exited              storage-provisioner       3                   3c634681b7502       storage-provisioner
	a12e5381891d7       a0af72f2ec6d6       11 minutes ago      Running             kube-controller-manager   2                   94a4aaa50848c       kube-controller-manager-ha-198834
	fe185998877e5       90550c43ad2bc       11 minutes ago      Running             kube-apiserver            2                   d6726df5c2b9b       kube-apiserver-ha-198834
	cf774422f8560       5f1f5298c888d       11 minutes ago      Running             etcd                      2                   381fff0989ff3       etcd-ha-198834
	d13adfe60db79       46169d968e920       11 minutes ago      Running             kube-scheduler            2                   8bbb93a7ec96a       kube-scheduler-ha-198834
	078654c5590a8       765655ea60781       11 minutes ago      Exited              kube-vip                  2                   500a8b6d35e83       kube-vip-ha-198834
	bdc52003487f9       409467f978b4a       20 minutes ago      Exited              kindnet-cni               1                   13e9e86bdcc31       kindnet-h28vp
	d130ec085d5ce       8c811b4aec35f       20 minutes ago      Exited              busybox                   1                   4da814f488dc3       busybox-7b57f96db7-pstjp
	19c8584dae1b9       52546a367cc9e       20 minutes ago      Exited              coredns                   3                   fd69455479fe4       coredns-66bc5c9577-5wx4k
	8a501078c4170       52546a367cc9e       20 minutes ago      Exited              coredns                   3                   ec5acf2654663       coredns-66bc5c9577-mjbz6
	21dff06737d90       df0860106674d       20 minutes ago      Exited              kube-proxy                1                   02337f9cf4b12       kube-proxy-5tkhn
	e5f91b76238c9       a0af72f2ec6d6       20 minutes ago      Exited              kube-controller-manager   1                   af005efeb3a09       kube-controller-manager-ha-198834
	371ff065d1dfd       46169d968e920       20 minutes ago      Exited              kube-scheduler            1                   7a13ea6d24610       kube-scheduler-ha-198834
	7b047b1099553       5f1f5298c888d       20 minutes ago      Exited              etcd                      1                   beb17aaed35c3       etcd-ha-198834
	9f5475377594b       90550c43ad2bc       20 minutes ago      Exited              kube-apiserver            1                   b47695e7722ae       kube-apiserver-ha-198834
	
	
	==> coredns [19c8584dae1b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53538 - 29295 "HINFO IN 9023489977302481875.6206531949632663336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037239604s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3ea692343682] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59407 - 55967 "HINFO IN 5689088512537161935.5038682222800020406. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030247814s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [8a501078c417] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35492 - 21170 "HINFO IN 5429275037699935078.1019057475364754304. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034969536s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bc84630c8d02] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45492 - 55461 "HINFO IN 6009941948480674143.4563724060648553547. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029246847s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-198834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:26:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:23:22 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:23:22 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:23:22 +0000   Tue, 16 Sep 2025 23:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:23:22 +0000   Tue, 16 Sep 2025 23:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-198834
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 134e1f7d2806496b84b4697c0fe10c3d
	  System UUID:                70b73bcc-60ff-4343-a209-12ec7b2f4c5a
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-pstjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 coredns-66bc5c9577-5wx4k             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 coredns-66bc5c9577-mjbz6             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 etcd-ha-198834                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-h28vp                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-198834             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-198834    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-5tkhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-198834             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-198834                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 29m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m                kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           29m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           28m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           20m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x9 over 12m)  kubelet          Node ha-198834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node ha-198834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-198834 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-198834 event: Registered Node ha-198834 in Controller
	
	
	Name:               ha-198834-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198834-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-198834
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_16T23_57_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198834-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:26:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:21:53 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:21:53 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:21:53 +0000   Tue, 16 Sep 2025 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:21:53 +0000   Wed, 17 Sep 2025 00:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-198834-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 a16295c5554f4cf4a746ecfe43bb9dc6
	  System UUID:                0c81ae9e-e051-426a-b3a5-724dde7bd0d3
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kg4q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 etcd-ha-198834-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-2vbn5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-198834-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-198834-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-h2fxd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-198834-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-198834-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  RegisteredNode           29m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           29m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           28m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           20m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-198834-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-198834-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-198834-m02 event: Registered Node ha-198834-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +4.978924] IPv4: martian source 10.244.0.1 from 10.244.0.28, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 72 94 9b 14 ba 08 06
	[  +0.000493] IPv4: martian source 10.244.0.28 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:51] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae a4 31 55 21 41 08 06
	[  +0.000514] IPv4: martian source 10.244.0.32 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[  +0.000564] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 7f 09 ee 64 b6 08 06
	[Sep16 23:52] IPv4: martian source 10.244.0.33 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 d7 9b ce 2e 89 08 06
	[  +0.314795] IPv4: martian source 10.244.0.27 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 36 22 af 75 97 08 06
	[Sep16 23:54] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 29 1f 42 ac 54 08 06
	[  +0.101248] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 b5 d6 ff 8d 76 08 06
	[ +45.338162] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 e6 31 2b 22 43 08 06
	[Sep16 23:55] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 85 bd 7a a7 08 06
	[Sep16 23:56] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 1c d5 f1 cd b8 08 06
	
	
	==> etcd [7b047b109955] <==
	{"level":"warn","ts":"2025-09-17T00:14:41.370828Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:14:38.878803Z","time spent":"2.492013635s","remote":"127.0.0.1:51944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":0,"request content":"key:\"/registry/flowschemas\" limit:1 "}
	2025/09/17 00:14:41 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"error","ts":"2025-09-17T00:14:41.442512Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:14:41.442635Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:14:41.442684Z","caller":"etcdserver/server.go:1272","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"warn","ts":"2025-09-17T00:14:41.442716Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:14:41.442827Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:14:41.442851Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:14:41.442744Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:14:41.442870Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:14:41.442879Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:14:41.442785Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-17T00:14:41.442795Z","caller":"etcdserver/server.go:900","msg":"failed to revoke lease","lease-id":"70cc9954fe939b07","error":"etcdserver: request cancelled"}
	{"level":"info","ts":"2025-09-17T00:14:41.442787Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-17T00:14:41.442948Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:14:41.442967Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:14:41.443008Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:14:41.443036Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:14:41.443080Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:14:41.443105Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:14:41.443125Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"5303da23f403d0c1"}
	{"level":"info","ts":"2025-09-17T00:14:41.445127Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-17T00:14:41.445190Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:14:41.445225Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-17T00:14:41.445234Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-198834","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [cf774422f856] <==
	{"level":"warn","ts":"2025-09-17T00:15:43.110113Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:15:36.310722Z","time spent":"6.799378494s","remote":"127.0.0.1:53030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/pods/kube-system/kube-vip-ha-198834\" limit:1 "}
	{"level":"warn","ts":"2025-09-17T00:15:43.110165Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.90092939s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" limit:1 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2025-09-17T00:15:43.110193Z","caller":"traceutil/trace.go:172","msg":"trace[1304307806] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; }","duration":"7.900958581s","start":"2025-09-17T00:15:35.209227Z","end":"2025-09-17T00:15:43.110186Z","steps":["trace[1304307806] 'agreement among raft nodes before linearized reading'  (duration: 7.900929193s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:15:43.110216Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:15:35.209210Z","time spent":"7.900999775s","remote":"127.0.0.1:52768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":0,"response size":0,"request content":"key:\"/registry/masterleases/192.168.49.2\" limit:1 "}
	{"level":"warn","ts":"2025-09-17T00:15:43.110243Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"11.085760131s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2025-09-17T00:15:43.110268Z","caller":"traceutil/trace.go:172","msg":"trace[80477706] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"11.085785421s","start":"2025-09-17T00:15:32.024476Z","end":"2025-09-17T00:15:43.110261Z","steps":["trace[80477706] 'agreement among raft nodes before linearized reading'  (duration: 11.085760488s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:15:43.110288Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:15:32.024453Z","time spent":"11.085828013s","remote":"127.0.0.1:53006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-09-17T00:15:43.110322Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"11.634346963s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-66bc5c9577-mjbz6.1865e9a1fb3f45b8\" limit:1 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2025-09-17T00:15:43.110347Z","caller":"traceutil/trace.go:172","msg":"trace[1838563252] range","detail":"{range_begin:/registry/events/kube-system/coredns-66bc5c9577-mjbz6.1865e9a1fb3f45b8; range_end:; }","duration":"11.634372021s","start":"2025-09-17T00:15:31.475968Z","end":"2025-09-17T00:15:43.110340Z","steps":["trace[1838563252] 'agreement among raft nodes before linearized reading'  (duration: 11.634347147s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:15:43.110366Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:15:31.475947Z","time spent":"11.634413273s","remote":"127.0.0.1:52844","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":0,"response size":0,"request content":"key:\"/registry/events/kube-system/coredns-66bc5c9577-mjbz6.1865e9a1fb3f45b8\" limit:1 "}
	{"level":"info","ts":"2025-09-17T00:15:43.119418Z","caller":"traceutil/trace.go:172","msg":"trace[1776354289] transaction","detail":"{read_only:false; response_revision:4411; number_of_response:1; }","duration":"1.700705224s","start":"2025-09-17T00:15:41.418701Z","end":"2025-09-17T00:15:43.119406Z","steps":["trace[1776354289] 'process raft request'  (duration: 1.700608985s)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:15:43.119759Z","caller":"traceutil/trace.go:172","msg":"trace[1478184252] transaction","detail":"{read_only:false; response_revision:4412; number_of_response:1; }","duration":"1.312543015s","start":"2025-09-17T00:15:41.807199Z","end":"2025-09-17T00:15:43.119742Z","steps":["trace[1478184252] 'process raft request'  (duration: 1.312183617s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:15:43.120115Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:15:41.418680Z","time spent":"1.700784125s","remote":"127.0.0.1:53162","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":525,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/ha-198834\" mod_revision:4400 > success:<request_put:<key:\"/registry/leases/kube-node-lease/ha-198834\" value_size:475 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/ha-198834\" > >"}
	{"level":"warn","ts":"2025-09-17T00:15:43.120173Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:15:41.807176Z","time spent":"1.312623721s","remote":"127.0.0.1:53162","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":674,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-hxygcsz4tng6hmluvaoa4vlmha\" mod_revision:4401 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-hxygcsz4tng6hmluvaoa4vlmha\" value_size:601 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-hxygcsz4tng6hmluvaoa4vlmha\" > >"}
	{"level":"warn","ts":"2025-09-17T00:15:43.120220Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"5.943357997s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-198834-m02\" limit:1 ","response":"range_response_count:1 size:4775"}
	{"level":"info","ts":"2025-09-17T00:15:43.120264Z","caller":"traceutil/trace.go:172","msg":"trace[1707712025] range","detail":"{range_begin:/registry/minions/ha-198834-m02; range_end:; response_count:1; response_revision:4412; }","duration":"5.94340223s","start":"2025-09-17T00:15:37.176851Z","end":"2025-09-17T00:15:43.120254Z","steps":["trace[1707712025] 'agreement among raft nodes before linearized reading'  (duration: 5.943280976s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:15:43.120294Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:15:37.176836Z","time spent":"5.943451247s","remote":"127.0.0.1:53016","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":4799,"request content":"key:\"/registry/minions/ha-198834-m02\" limit:1 "}
	{"level":"warn","ts":"2025-09-17T00:15:43.120088Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"6.068640182s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:15:43.120508Z","caller":"traceutil/trace.go:172","msg":"trace[231650644] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:4412; }","duration":"6.069066305s","start":"2025-09-17T00:15:37.051430Z","end":"2025-09-17T00:15:43.120496Z","steps":["trace[231650644] 'agreement among raft nodes before linearized reading'  (duration: 6.068619533s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:15:43.120579Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"762.085908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:15:43.121315Z","caller":"traceutil/trace.go:172","msg":"trace[1019430679] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:4412; }","duration":"762.825212ms","start":"2025-09-17T00:15:42.358479Z","end":"2025-09-17T00:15:43.121305Z","steps":["trace[1019430679] 'agreement among raft nodes before linearized reading'  (duration: 761.590114ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:15:43.121349Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:15:42.358464Z","time spent":"762.874946ms","remote":"127.0.0.1:55278","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-09-17T00:25:02.738804Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":5108}
	{"level":"info","ts":"2025-09-17T00:25:02.820493Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":5108,"took":"81.067881ms","hash":3983705498,"current-db-size-bytes":9728000,"current-db-size":"9.7 MB","current-db-size-in-use-bytes":2129920,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-17T00:25:02.820549Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3983705498,"revision":5108,"compact-revision":-1}
	
	
	==> kernel <==
	 00:26:59 up  3:09,  0 users,  load average: 0.20, 0.48, 0.98
	Linux ha-198834 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2385b48b87ce] <==
	I0917 00:25:56.029776       1 main.go:301] handling current node
	I0917 00:26:06.037870       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:06.037922       1 main.go:301] handling current node
	I0917 00:26:06.037945       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:26:06.037954       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:26:16.029354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:16.029390       1 main.go:301] handling current node
	I0917 00:26:16.029406       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:26:16.029411       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:26:26.030997       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:26.031045       1 main.go:301] handling current node
	I0917 00:26:26.031066       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:26:26.031073       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:26:36.036046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:36.036087       1 main.go:301] handling current node
	I0917 00:26:36.036106       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:26:36.036111       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:26:46.028995       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:26:46.029027       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:26:46.029254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:46.029269       1 main.go:301] handling current node
	I0917 00:26:56.028993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:56.029033       1 main.go:301] handling current node
	I0917 00:26:56.029052       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:26:56.029059       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [bdc52003487f] <==
	I0917 00:13:51.563147       1 main.go:301] handling current node
	I0917 00:13:51.563166       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:13:51.563171       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:13:51.563440       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:13:51.563450       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:01.562311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:01.562353       1 main.go:301] handling current node
	I0917 00:14:01.562369       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:01.562373       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:14:01.562589       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:14:01.562603       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:11.571668       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:11.571702       1 main.go:301] handling current node
	I0917 00:14:11.571718       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:11.571723       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:14:11.571936       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:14:11.571959       1 main.go:324] Node ha-198834-m03 has CIDR [10.244.2.0/24] 
	I0917 00:14:21.562648       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:21.562681       1 main.go:301] handling current node
	I0917 00:14:21.562695       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:21.562699       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	I0917 00:14:31.563092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:14:31.563133       1 main.go:301] handling current node
	I0917 00:14:31.563191       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:14:31.563205       1 main.go:324] Node ha-198834-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9f5475377594] <==
	W0917 00:14:41.185550       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.185694       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.185744       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.185832       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.186130       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.186192       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.186520       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.187056       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.187471       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.187514       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.187628       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.187694       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:14:41.188440       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	{"level":"warn","ts":"2025-09-17T00:14:41.191975Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0014f0960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0917 00:14:41.192103       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0917 00:14:41.192133       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0917 00:14:41.192149       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 4.875µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0917 00:14:41.193343       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0917 00:14:41.193491       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.411513ms" method="PATCH" path="/api/v1/namespaces/kube-system/events/kube-apiserver-ha-198834.1865e92d29a60d78" result=null
	{"level":"warn","ts":"2025-09-17T00:14:41.214826Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001657860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0917 00:14:41.214978       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0917 00:14:41.215496       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0917 00:14:41.216589       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0917 00:14:41.216675       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0917 00:14:41.217923       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.109786ms" method="GET" path="/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" result=null
	
	
	==> kube-apiserver [fe185998877e] <==
	E0917 00:15:43.110221       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:15:43.110221       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:15:43.110256       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0917 00:15:43.110486       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-09-17T00:15:43.112438Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001a52f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	W0917 00:15:43.147844       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0917 00:16:16.582988       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:16.824222       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:27.503438       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:35.007165       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:18:42.814616       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:18:48.482411       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:19:52.650078       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:20:12.089234       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:21:10.131989       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:21:38.730805       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:22:16.010739       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:23:04.739296       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:23:41.718624       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:24:17.837220       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:25:03.832474       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:25:08.473035       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:25:45.191762       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:26:12.094098       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:26:52.667520       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [a12e5381891d] <==
	I0917 00:15:07.248864       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:15:27.167887       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:15:27.167948       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:15:27.167954       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:15:27.167959       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:15:27.167963       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:15:47.168974       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:15:47.169019       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:15:47.169032       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:15:47.169037       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:15:47.169045       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	I0917 00:15:47.179900       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-d8brp"
	I0917 00:15:47.200166       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-d8brp"
	I0917 00:15:47.200226       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-198834-m03"
	I0917 00:15:47.219540       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-198834-m03"
	I0917 00:15:47.219581       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-198834-m03"
	I0917 00:15:47.241206       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-198834-m03"
	I0917 00:15:47.241243       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-198834-m03"
	I0917 00:15:47.258842       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-198834-m03"
	I0917 00:15:47.258921       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-198834-m03"
	I0917 00:15:47.276588       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-198834-m03"
	I0917 00:15:47.276626       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-67fn9"
	I0917 00:15:47.297221       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-67fn9"
	I0917 00:15:47.297260       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-198834-m03"
	I0917 00:15:47.316581       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-198834-m03"
	
	
	==> kube-controller-manager [e5f91b76238c] <==
	I0917 00:06:42.688133       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:06:42.688192       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:06:42.688272       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:06:42.688535       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:06:42.688667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:06:42.689165       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m03"
	I0917 00:06:42.689227       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834"
	I0917 00:06:42.689234       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:06:42.689307       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198834-m02"
	I0917 00:06:42.689381       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:06:42.689800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:06:42.690667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:06:42.694964       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:06:42.699163       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:06:42.700692       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:06:42.713986       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:06:42.717269       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:06:42.722438       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:06:42.724798       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:06:42.752877       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:14:22.716991       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717039       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717048       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717085       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	E0917 00:14:22.717092       1 gc_controller.go:151] "Failed to get node" err="node \"ha-198834-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198834-m03"
	
	
	==> kube-proxy [21dff06737d9] <==
	I0917 00:06:40.905839       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:06:40.968196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0917 00:06:44.060317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-198834&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0917 00:06:45.568444       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:06:45.568482       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:06:45.568583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:06:45.590735       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:06:45.590782       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:06:45.596121       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:06:45.596463       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:06:45.596508       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:06:45.597774       1 config.go:200] "Starting service config controller"
	I0917 00:06:45.597791       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:06:45.597883       1 config.go:309] "Starting node config controller"
	I0917 00:06:45.597987       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:06:45.598035       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:06:45.598042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:06:45.598039       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:06:45.598057       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:06:45.698355       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:06:45.698442       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:06:45.698447       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:06:45.698470       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [4aaffb44988e] <==
	I0917 00:15:05.475585       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:15:05.537002       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0917 00:15:08.636309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-198834&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0917 00:15:10.138668       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:15:10.138707       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:15:10.138824       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:15:10.159830       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:15:10.159962       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:15:10.165377       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:15:10.165783       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:15:10.165819       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:15:10.167472       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:15:10.167507       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:15:10.167509       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:15:10.167524       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:15:10.167534       1 config.go:200] "Starting service config controller"
	I0917 00:15:10.167541       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:15:10.167751       1 config.go:309] "Starting node config controller"
	I0917 00:15:10.167759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:15:10.167766       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:15:10.268082       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:15:10.268117       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:15:10.268125       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [371ff065d1df] <==
	I0917 00:06:34.304210       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:06:39.358570       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 00:06:39.358610       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 00:06:39.358624       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:06:39.358634       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:06:39.390353       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:06:39.390375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:06:39.392538       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:39.392576       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:06:39.392924       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:06:39.392961       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:06:39.493239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:14:41.170943       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:14:41.171077       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:14:41.171135       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:14:41.176638       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:14:41.176662       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:14:41.176739       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d13adfe60db7] <==
	I0917 00:15:00.733524       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:15:03.791258       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 00:15:03.791293       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 00:15:03.791306       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:15:03.791317       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:15:03.838653       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:15:03.838690       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:15:03.846309       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:15:03.846541       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:15:03.847637       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:15:03.847771       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:15:03.946762       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:24:49 ha-198834 kubelet[1360]: E0917 00:24:49.897177    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543050 maxSize=10485760
	Sep 17 00:24:59 ha-198834 kubelet[1360]: E0917 00:24:59.902525    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:24:59 ha-198834 kubelet[1360]: E0917 00:24:59.902632    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543050 maxSize=10485760
	Sep 17 00:25:09 ha-198834 kubelet[1360]: E0917 00:25:09.908473    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:25:09 ha-198834 kubelet[1360]: E0917 00:25:09.908577    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543395 maxSize=10485760
	Sep 17 00:25:19 ha-198834 kubelet[1360]: E0917 00:25:19.911716    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:25:19 ha-198834 kubelet[1360]: E0917 00:25:19.911801    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543395 maxSize=10485760
	Sep 17 00:25:29 ha-198834 kubelet[1360]: E0917 00:25:29.916095    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:25:29 ha-198834 kubelet[1360]: E0917 00:25:29.916202    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543395 maxSize=10485760
	Sep 17 00:25:39 ha-198834 kubelet[1360]: E0917 00:25:39.921823    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:25:39 ha-198834 kubelet[1360]: E0917 00:25:39.921993    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543395 maxSize=10485760
	Sep 17 00:25:49 ha-198834 kubelet[1360]: E0917 00:25:49.925825    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:25:49 ha-198834 kubelet[1360]: E0917 00:25:49.925949    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543560 maxSize=10485760
	Sep 17 00:25:59 ha-198834 kubelet[1360]: E0917 00:25:59.931724    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:25:59 ha-198834 kubelet[1360]: E0917 00:25:59.931836    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543560 maxSize=10485760
	Sep 17 00:26:09 ha-198834 kubelet[1360]: E0917 00:26:09.934144    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:26:09 ha-198834 kubelet[1360]: E0917 00:26:09.934240    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543560 maxSize=10485760
	Sep 17 00:26:19 ha-198834 kubelet[1360]: E0917 00:26:19.936922    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:26:19 ha-198834 kubelet[1360]: E0917 00:26:19.937014    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543725 maxSize=10485760
	Sep 17 00:26:29 ha-198834 kubelet[1360]: E0917 00:26:29.941963    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:26:29 ha-198834 kubelet[1360]: E0917 00:26:29.942062    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543725 maxSize=10485760
	Sep 17 00:26:39 ha-198834 kubelet[1360]: E0917 00:26:39.948050    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:26:39 ha-198834 kubelet[1360]: E0917 00:26:39.948156    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543725 maxSize=10485760
	Sep 17 00:26:49 ha-198834 kubelet[1360]: E0917 00:26:49.949813    1360 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3"
	Sep 17 00:26:49 ha-198834 kubelet[1360]: E0917 00:26:49.949926    1360 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log\": failed to reopen container log \"fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fe185998877e52ac2e6d48f4d1efbb09a1c38db094dc1602b7d68d8e355c15d3" path="/var/log/pods/kube-system_kube-apiserver-ha-198834_43a2f3bce5dec1a4e1f92007c9922b7e/kube-apiserver/2.log" currentSize=15543725 maxSize=10485760
	

                                                
                                                
-- /stdout --
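
Note: the post-mortem helpers that follow re-query the cluster with kubectl and the field selector status.phase!=Running. Purely for reference, the same query can be issued programmatically with client-go; this is a minimal sketch and is not part of the test suite. The kubeconfig path is a placeholder, and the filter string is copied from the helper invocation below.

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the harness uses the "ha-198834" context.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same filter the post-mortem helper passes to kubectl: pods not in phase Running.
		pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
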
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198834 -n ha-198834
helpers_test.go:269: (dbg) Run:  kubectl --context ha-198834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-xfzdd
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-198834 describe pod busybox-7b57f96db7-xfzdd
helpers_test.go:290: (dbg) kubectl --context ha-198834 describe pod busybox-7b57f96db7-xfzdd:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-xfzdd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z55j5 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-z55j5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  12m                   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  12m                   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  12m                   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  12m                   default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11m                   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  116s (x2 over 6m56s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  12m (x2 over 12m)     default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11m                   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  63s (x3 over 11m)     default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (728.51s)
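The FailedScheduling events above ("didn't match pod anti-affinity rules", no preemption victims) are what the scheduler emits when a hard pod anti-affinity rule cannot be satisfied once fewer schedulable nodes remain than pending replicas. A minimal sketch of that kind of constraint, written with the upstream Kubernetes Go API types, follows; the app=busybox selector is taken from the pod labels in the describe output above, while the kubernetes.io/hostname topology key is an assumption about the test manifest, which is not reproduced in this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxAntiAffinity builds a hard anti-affinity rule that keeps pods
// labelled app=busybox on distinct nodes (one pod per kubernetes.io/hostname).
// With a rule like this, a 3-replica deployment on a cluster with only two
// schedulable nodes leaves one replica Pending with the
// "didn't match pod anti-affinity rules" events seen above.
func busyboxAntiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", busyboxAntiAffinity())
}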

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-131853 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-131853 -n newest-cni-131853
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-131853 -n newest-cni-131853: exit status 2 (330.388577ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-131853 -n newest-cni-131853
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-131853 -n newest-cni-131853: exit status 2 (367.777804ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-131853 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-131853 -n newest-cni-131853
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-131853 -n newest-cni-131853: exit status 2 (371.85636ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-131853 -n newest-cni-131853
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-131853 -n newest-cni-131853: exit status 2 (407.568246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
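The failure above comes from sampling {{.Kubelet}} once right after unpause and seeing "Stopped". A hedged sketch of a more tolerant check, re-running the same status command (binary path and flags copied verbatim from the log) until it reports "Running" or a timeout expires, follows; the retry loop itself is only an illustration, not the harness's actual behaviour.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForKubelet re-runs `minikube status --format={{.Kubelet}}` until the
// output is "Running" or the timeout elapses. The non-zero exit status seen
// in the log is expected while a component is down, so the error from
// Output() is ignored and only stdout is inspected.
func waitForKubelet(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, _ := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.Kubelet}}", "-p", profile, "-n", profile).Output()
		state := strings.TrimSpace(string(out))
		if state == "Running" {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("kubelet still %q after %s", state, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForKubelet("newest-cni-131853", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}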
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-131853
helpers_test.go:243: (dbg) docker inspect newest-cni-131853:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5",
	        "Created": "2025-09-17T00:48:41.103833985Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1062872,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:49:20.503481218Z",
	            "FinishedAt": "2025-09-17T00:49:19.702454833Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5/hosts",
	        "LogPath": "/var/lib/docker/containers/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5-json.log",
	        "Name": "/newest-cni-131853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-131853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-131853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5",
	                "LowerDir": "/var/lib/docker/overlay2/09654e3d1dd46f263cb84bb9ef362c0b597e9e7be19d8b58e9533c33df7cfca9-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09654e3d1dd46f263cb84bb9ef362c0b597e9e7be19d8b58e9533c33df7cfca9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09654e3d1dd46f263cb84bb9ef362c0b597e9e7be19d8b58e9533c33df7cfca9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09654e3d1dd46f263cb84bb9ef362c0b597e9e7be19d8b58e9533c33df7cfca9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-131853",
	                "Source": "/var/lib/docker/volumes/newest-cni-131853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-131853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-131853",
	                "name.minikube.sigs.k8s.io": "newest-cni-131853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "308a7be2975b571ad72bfad9dec8e91e274c878c0b48665655a0dab61deb5c3a",
	            "SandboxKey": "/var/run/docker/netns/308a7be2975b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-131853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:e9:0d:da:87:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d4557b311633dd1244c0791d913116cec0ac391db4447b38fc8fa55426ee83f0",
	                    "EndpointID": "0fa5902e6086f0adf0085a10e26e169c91ef6030cf4afd479a98285efc377b40",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-131853",
	                        "e38559e7f80c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
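Further down in this log the harness resolves the container's SSH endpoint with a docker inspect Go template, {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. A small sketch that performs the equivalent lookup by decoding the inspect JSON shown above follows; the struct models only the fields needed for that one lookup and is not a general docker inspect schema.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry covers just the NetworkSettings.Ports part of `docker inspect`
// output, i.e. the structure visible in the JSON above ("22/tcp" -> "33110").
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// sshHostPort returns the host port that Docker mapped to the container's
// 22/tcp port, the same value the harness reads via the Go template.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		return "", fmt.Errorf("no 22/tcp binding for %s", container)
	}
	return entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
}

func main() {
	port, err := sshHostPort("newest-cni-131853")
	fmt.Println(port, err)
}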
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-131853 -n newest-cni-131853
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-131853 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-131853 logs -n 25: (1.213757946s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ start   │ -p no-preload-152605 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-152605            │ jenkins │ v1.37.0 │ 17 Sep 25 00:46 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker                                                                                                                             │ kubernetes-upgrade-401604    │ jenkins │ v1.37.0 │ 17 Sep 25 00:47 UTC │                     │
	│ start   │ -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker                                                                                                      │ kubernetes-upgrade-401604    │ jenkins │ v1.37.0 │ 17 Sep 25 00:47 UTC │ 17 Sep 25 00:48 UTC │
	│ start   │ -p cert-expiration-843787 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker                                                                                                                                     │ cert-expiration-843787       │ jenkins │ v1.37.0 │ 17 Sep 25 00:47 UTC │ 17 Sep 25 00:48 UTC │
	│ delete  │ -p kubernetes-upgrade-401604                                                                                                                                                                                                                    │ kubernetes-upgrade-401604    │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ delete  │ -p cert-expiration-843787                                                                                                                                                                                                                       │ cert-expiration-843787       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ start   │ -p embed-certs-411882 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-411882           │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p disable-driver-mounts-171805                                                                                                                                                                                                                 │ disable-driver-mounts-171805 │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ start   │ -p default-k8s-diff-port-990042 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-990042 │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:49 UTC │
	│ image   │ old-k8s-version-591839 image list --format=json                                                                                                                                                                                                 │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ pause   │ -p old-k8s-version-591839 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ unpause │ -p old-k8s-version-591839 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ delete  │ -p old-k8s-version-591839                                                                                                                                                                                                                       │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ delete  │ -p old-k8s-version-591839                                                                                                                                                                                                                       │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ start   │ -p newest-cni-131853 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-131853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ stop    │ -p newest-cni-131853 --alsologtostderr -v=3                                                                                                                                                                                                     │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-131853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p newest-cni-131853 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p no-preload-152605 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-152605            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ image   │ newest-cni-131853 image list --format=json                                                                                                                                                                                                      │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p newest-cni-131853 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p no-preload-152605 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-152605            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p newest-cni-131853 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p no-preload-152605                                                                                                                                                                                                                            │ no-preload-152605            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:49:20
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:49:20.270587 1062675 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:49:20.270855 1062675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:49:20.270869 1062675 out.go:374] Setting ErrFile to fd 2...
	I0917 00:49:20.270878 1062675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:49:20.271078 1062675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:49:20.271576 1062675 out.go:368] Setting JSON to false
	I0917 00:49:20.273032 1062675 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12692,"bootTime":1758057468,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:49:20.273123 1062675 start.go:140] virtualization: kvm guest
	I0917 00:49:20.275167 1062675 out.go:179] * [newest-cni-131853] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:49:20.276597 1062675 notify.go:220] Checking for updates...
	I0917 00:49:20.276605 1062675 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:49:20.278090 1062675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:49:20.279414 1062675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:49:20.280770 1062675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:49:20.282262 1062675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:49:20.283589 1062675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:49:20.286364 1062675 config.go:182] Loaded profile config "newest-cni-131853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:49:20.287167 1062675 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:49:20.312432 1062675 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:49:20.312581 1062675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:49:20.373271 1062675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:49:20.362198883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:49:20.373427 1062675 docker.go:318] overlay module found
	I0917 00:49:20.375104 1062675 out.go:179] * Using the docker driver based on existing profile
	I0917 00:49:20.376176 1062675 start.go:304] selected driver: docker
	I0917 00:49:20.376190 1062675 start.go:918] validating driver "docker" against &{Name:newest-cni-131853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-131853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:49:20.376300 1062675 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:49:20.377035 1062675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:49:20.431609 1062675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:49:20.421767683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:49:20.431887 1062675 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 00:49:20.431927 1062675 cni.go:84] Creating CNI manager for ""
	I0917 00:49:20.432004 1062675 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 00:49:20.432048 1062675 start.go:348] cluster config:
	{Name:newest-cni-131853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-131853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:49:20.433946 1062675 out.go:179] * Starting "newest-cni-131853" primary control-plane node in "newest-cni-131853" cluster
	I0917 00:49:20.435058 1062675 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:49:20.436446 1062675 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:49:20.437551 1062675 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:49:20.437578 1062675 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:49:20.437596 1062675 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:49:20.437609 1062675 cache.go:58] Caching tarball of preloaded images
	I0917 00:49:20.437694 1062675 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:49:20.437716 1062675 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:49:20.437847 1062675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853/config.json ...
	I0917 00:49:20.458610 1062675 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:49:20.458635 1062675 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:49:20.458654 1062675 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:49:20.458684 1062675 start.go:360] acquireMachinesLock for newest-cni-131853: {Name:mk57dcfcbc0d2313f9abb6d06628ceb853f38092 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:49:20.458752 1062675 start.go:364] duration metric: took 44.217µs to acquireMachinesLock for "newest-cni-131853"
	I0917 00:49:20.458774 1062675 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:49:20.458783 1062675 fix.go:54] fixHost starting: 
	I0917 00:49:20.459070 1062675 cli_runner.go:164] Run: docker container inspect newest-cni-131853 --format={{.State.Status}}
	I0917 00:49:20.476054 1062675 fix.go:112] recreateIfNeeded on newest-cni-131853: state=Stopped err=<nil>
	W0917 00:49:20.476090 1062675 fix.go:138] unexpected machine state, will restart: <nil>
	W0917 00:49:18.790214 1045703 pod_ready.go:104] pod "coredns-66bc5c9577-6sg84" is not "Ready", error: <nil>
	W0917 00:49:21.290577 1045703 pod_ready.go:104] pod "coredns-66bc5c9577-6sg84" is not "Ready", error: <nil>
	W0917 00:49:18.288439 1046037 pod_ready.go:104] pod "coredns-66bc5c9577-lkf52" is not "Ready", error: <nil>
	W0917 00:49:20.289082 1046037 pod_ready.go:104] pod "coredns-66bc5c9577-lkf52" is not "Ready", error: <nil>
	W0917 00:49:19.058105 1028016 pod_ready.go:104] pod "coredns-66bc5c9577-v7s5z" is not "Ready", error: <nil>
	W0917 00:49:21.059034 1028016 pod_ready.go:104] pod "coredns-66bc5c9577-v7s5z" is not "Ready", error: <nil>
	I0917 00:49:23.557847 1028016 pod_ready.go:94] pod "coredns-66bc5c9577-v7s5z" is "Ready"
	I0917 00:49:23.557874 1028016 pod_ready.go:86] duration metric: took 2m20.505483011s for pod "coredns-66bc5c9577-v7s5z" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:23.560300 1028016 pod_ready.go:83] waiting for pod "etcd-no-preload-152605" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:23.563835 1028016 pod_ready.go:94] pod "etcd-no-preload-152605" is "Ready"
	I0917 00:49:23.563853 1028016 pod_ready.go:86] duration metric: took 3.528715ms for pod "etcd-no-preload-152605" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:23.565734 1028016 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-152605" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:23.569117 1028016 pod_ready.go:94] pod "kube-apiserver-no-preload-152605" is "Ready"
	I0917 00:49:23.569135 1028016 pod_ready.go:86] duration metric: took 3.383706ms for pod "kube-apiserver-no-preload-152605" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:23.570962 1028016 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-152605" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:23.756634 1028016 pod_ready.go:94] pod "kube-controller-manager-no-preload-152605" is "Ready"
	I0917 00:49:23.756664 1028016 pod_ready.go:86] duration metric: took 185.677248ms for pod "kube-controller-manager-no-preload-152605" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:23.957272 1028016 pod_ready.go:83] waiting for pod "kube-proxy-qqhrd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:24.357104 1028016 pod_ready.go:94] pod "kube-proxy-qqhrd" is "Ready"
	I0917 00:49:24.357136 1028016 pod_ready.go:86] duration metric: took 399.83387ms for pod "kube-proxy-qqhrd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:24.556834 1028016 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-152605" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:24.956546 1028016 pod_ready.go:94] pod "kube-scheduler-no-preload-152605" is "Ready"
	I0917 00:49:24.956579 1028016 pod_ready.go:86] duration metric: took 399.712368ms for pod "kube-scheduler-no-preload-152605" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:24.956593 1028016 pod_ready.go:40] duration metric: took 2m21.907936354s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:49:25.004406 1028016 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:49:25.006538 1028016 out.go:179] * Done! kubectl is now configured to use "no-preload-152605" cluster and "default" namespace by default
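The pod_ready.go lines above poll kube-system pods until each reports the Ready condition. A minimal client-go sketch of that per-pod check follows; reading the kubeconfig from the KUBECONFIG environment variable is an assumption made so the example is runnable on its own, not how the harness wires its client.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod carries the Ready=True condition,
// the signal the pod_ready.go log lines above wait for.
func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumption: a kubeconfig path is available via KUBECONFIG.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(context.Background(), client, "kube-system", "etcd-no-preload-152605")
	fmt.Println(ready, err)
}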
	I0917 00:49:20.478015 1062675 out.go:252] * Restarting existing docker container for "newest-cni-131853" ...
	I0917 00:49:20.478084 1062675 cli_runner.go:164] Run: docker start newest-cni-131853
	I0917 00:49:20.736820 1062675 cli_runner.go:164] Run: docker container inspect newest-cni-131853 --format={{.State.Status}}
	I0917 00:49:20.757705 1062675 kic.go:430] container "newest-cni-131853" state is running.
	I0917 00:49:20.758172 1062675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-131853
	I0917 00:49:20.778405 1062675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853/config.json ...
	I0917 00:49:20.778720 1062675 machine.go:93] provisionDockerMachine start ...
	I0917 00:49:20.778810 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:20.800300 1062675 main.go:141] libmachine: Using SSH client type: native
	I0917 00:49:20.800538 1062675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I0917 00:49:20.800550 1062675 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:49:20.801108 1062675 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35572->127.0.0.1:33110: read: connection reset by peer
	I0917 00:49:23.942264 1062675 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-131853
	
	I0917 00:49:23.942301 1062675 ubuntu.go:182] provisioning hostname "newest-cni-131853"
	I0917 00:49:23.942381 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:23.960836 1062675 main.go:141] libmachine: Using SSH client type: native
	I0917 00:49:23.961116 1062675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I0917 00:49:23.961131 1062675 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-131853 && echo "newest-cni-131853" | sudo tee /etc/hostname
	I0917 00:49:24.112111 1062675 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-131853
	
	I0917 00:49:24.112199 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:24.130272 1062675 main.go:141] libmachine: Using SSH client type: native
	I0917 00:49:24.130571 1062675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I0917 00:49:24.130599 1062675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-131853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-131853/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-131853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:49:24.269142 1062675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:49:24.269183 1062675 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:49:24.269246 1062675 ubuntu.go:190] setting up certificates
	I0917 00:49:24.269261 1062675 provision.go:84] configureAuth start
	I0917 00:49:24.269321 1062675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-131853
	I0917 00:49:24.288057 1062675 provision.go:143] copyHostCerts
	I0917 00:49:24.288141 1062675 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:49:24.288167 1062675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:49:24.288272 1062675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:49:24.288428 1062675 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:49:24.288443 1062675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:49:24.288492 1062675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:49:24.288667 1062675 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:49:24.288678 1062675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:49:24.288722 1062675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:49:24.288801 1062675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.newest-cni-131853 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-131853]
	I0917 00:49:24.483976 1062675 provision.go:177] copyRemoteCerts
	I0917 00:49:24.484041 1062675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:49:24.484080 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:24.503157 1062675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/newest-cni-131853/id_rsa Username:docker}
	I0917 00:49:24.602642 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:49:24.630302 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:49:24.657782 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 00:49:24.685591 1062675 provision.go:87] duration metric: took 416.315047ms to configureAuth
	I0917 00:49:24.685621 1062675 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:49:24.685823 1062675 config.go:182] Loaded profile config "newest-cni-131853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:49:24.685876 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:24.703801 1062675 main.go:141] libmachine: Using SSH client type: native
	I0917 00:49:24.704115 1062675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I0917 00:49:24.704133 1062675 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:49:24.843868 1062675 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:49:24.843900 1062675 ubuntu.go:71] root file system type: overlay
	I0917 00:49:24.844055 1062675 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:49:24.844131 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:24.862347 1062675 main.go:141] libmachine: Using SSH client type: native
	I0917 00:49:24.862560 1062675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I0917 00:49:24.862622 1062675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:49:25.013534 1062675 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:49:25.013604 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:25.038541 1062675 main.go:141] libmachine: Using SSH client type: native
	I0917 00:49:25.038740 1062675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33110 <nil> <nil>}
	I0917 00:49:25.038763 1062675 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:49:25.186829 1062675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
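	
	(Annotation: the diff-then-swap command above is what keeps unit provisioning idempotent: docker.service.new only replaces the live unit, followed by daemon-reload/enable/restart, when the contents actually differ. A rough local Go sketch of that pattern follows; it is not minikube's code, runs locally instead of over SSH, and the path and unit contents are placeholders.)
	
// Sketch: replace a systemd unit and restart the service only when the
// rendered contents changed. Illustrative only; error handling is minimal.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func swapUnitIfChanged(path string, newContents []byte) error {
	current, _ := os.ReadFile(path) // a missing file simply counts as "changed"
	if bytes.Equal(current, newContents) {
		return nil // unchanged: leave the running service alone
	}
	if err := os.WriteFile(path+".new", newContents, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n") // placeholder contents
	if err := swapUnitIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
		log.Fatal(err)
	}
}
	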
	I0917 00:49:25.186876 1062675 machine.go:96] duration metric: took 4.408136764s to provisionDockerMachine
	I0917 00:49:25.186893 1062675 start.go:293] postStartSetup for "newest-cni-131853" (driver="docker")
	I0917 00:49:25.186920 1062675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:49:25.186998 1062675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:49:25.187040 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:25.204978 1062675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/newest-cni-131853/id_rsa Username:docker}
	W0917 00:49:23.789794 1045703 pod_ready.go:104] pod "coredns-66bc5c9577-6sg84" is not "Ready", error: <nil>
	W0917 00:49:26.289563 1045703 pod_ready.go:104] pod "coredns-66bc5c9577-6sg84" is not "Ready", error: <nil>
	W0917 00:49:22.788955 1046037 pod_ready.go:104] pod "coredns-66bc5c9577-lkf52" is not "Ready", error: <nil>
	W0917 00:49:25.289869 1046037 pod_ready.go:104] pod "coredns-66bc5c9577-lkf52" is not "Ready", error: <nil>
	I0917 00:49:25.306014 1062675 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:49:25.309857 1062675 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:49:25.309897 1062675 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:49:25.309921 1062675 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:49:25.309930 1062675 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:49:25.309943 1062675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:49:25.310012 1062675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:49:25.310104 1062675 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:49:25.310211 1062675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:49:25.320873 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:49:25.348347 1062675 start.go:296] duration metric: took 161.435689ms for postStartSetup
	I0917 00:49:25.348440 1062675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:49:25.348490 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:25.366772 1062675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/newest-cni-131853/id_rsa Username:docker}
	I0917 00:49:25.460259 1062675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:49:25.464931 1062675 fix.go:56] duration metric: took 5.006138717s for fixHost
	I0917 00:49:25.464959 1062675 start.go:83] releasing machines lock for "newest-cni-131853", held for 5.006195487s
	I0917 00:49:25.465019 1062675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-131853
	I0917 00:49:25.482122 1062675 ssh_runner.go:195] Run: cat /version.json
	I0917 00:49:25.482196 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:25.482235 1062675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:49:25.482304 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:25.501196 1062675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/newest-cni-131853/id_rsa Username:docker}
	I0917 00:49:25.501814 1062675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/newest-cni-131853/id_rsa Username:docker}
	I0917 00:49:25.667098 1062675 ssh_runner.go:195] Run: systemctl --version
	I0917 00:49:25.672045 1062675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:49:25.676821 1062675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:49:25.697514 1062675 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:49:25.697610 1062675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:49:25.708220 1062675 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:49:25.708257 1062675 start.go:495] detecting cgroup driver to use...
	I0917 00:49:25.708296 1062675 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:49:25.708397 1062675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:49:25.727306 1062675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:49:25.738336 1062675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:49:25.749696 1062675 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:49:25.749766 1062675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:49:25.760844 1062675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:49:25.771493 1062675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:49:25.782475 1062675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:49:25.794800 1062675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:49:25.805813 1062675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:49:25.816593 1062675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:49:25.828235 1062675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:49:25.839699 1062675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:49:25.848632 1062675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:49:25.858335 1062675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:49:25.928301 1062675 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:49:26.002063 1062675 start.go:495] detecting cgroup driver to use...
	I0917 00:49:26.002118 1062675 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:49:26.002172 1062675 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:49:26.017171 1062675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:49:26.029025 1062675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:49:26.047321 1062675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:49:26.060000 1062675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:49:26.073823 1062675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:49:26.093188 1062675 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:49:26.097069 1062675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:49:26.106973 1062675 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:49:26.125861 1062675 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:49:26.198717 1062675 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:49:26.269374 1062675 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:49:26.269473 1062675 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:49:26.291316 1062675 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:49:26.303188 1062675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:49:26.379740 1062675 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:49:27.239132 1062675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:49:27.251479 1062675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:49:27.264134 1062675 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 00:49:27.277513 1062675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:49:27.291326 1062675 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:49:27.366680 1062675 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:49:27.435998 1062675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:49:27.506604 1062675 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:49:27.529355 1062675 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:49:27.541741 1062675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:49:27.612773 1062675 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:49:27.696410 1062675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:49:27.708807 1062675 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:49:27.708871 1062675 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:49:27.712696 1062675 start.go:563] Will wait 60s for crictl version
	I0917 00:49:27.712758 1062675 ssh_runner.go:195] Run: which crictl
	I0917 00:49:27.716126 1062675 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:49:27.751335 1062675 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:49:27.751403 1062675 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:49:27.778489 1062675 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:49:27.809037 1062675 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:49:27.809116 1062675 cli_runner.go:164] Run: docker network inspect newest-cni-131853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:49:27.826226 1062675 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 00:49:27.830427 1062675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:49:27.843729 1062675 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0917 00:49:27.844843 1062675 kubeadm.go:875] updating cluster {Name:newest-cni-131853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-131853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:49:27.845025 1062675 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:49:27.845082 1062675 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:49:27.865852 1062675 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 00:49:27.865881 1062675 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:49:27.865967 1062675 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:49:27.887212 1062675 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 00:49:27.887240 1062675 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:49:27.887252 1062675 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 docker true true} ...
	I0917 00:49:27.887368 1062675 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-131853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-131853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:49:27.887435 1062675 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:49:27.939476 1062675 cni.go:84] Creating CNI manager for ""
	I0917 00:49:27.939522 1062675 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 00:49:27.939539 1062675 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0917 00:49:27.939567 1062675 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-131853 NodeName:newest-cni-131853 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:49:27.939733 1062675 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-131853"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 00:49:27.939811 1062675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:49:27.949653 1062675 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:49:27.949734 1062675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 00:49:27.959081 1062675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 00:49:27.978383 1062675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:49:27.998337 1062675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I0917 00:49:28.017403 1062675 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 00:49:28.022007 1062675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:49:28.033865 1062675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:49:28.111637 1062675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:49:28.141561 1062675 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853 for IP: 192.168.103.2
	I0917 00:49:28.141587 1062675 certs.go:194] generating shared ca certs ...
	I0917 00:49:28.141610 1062675 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:49:28.141783 1062675 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:49:28.141857 1062675 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:49:28.141877 1062675 certs.go:256] generating profile certs ...
	I0917 00:49:28.142035 1062675 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853/client.key
	I0917 00:49:28.142114 1062675 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853/apiserver.key.b7e9c203
	I0917 00:49:28.142173 1062675 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853/proxy-client.key
	I0917 00:49:28.142316 1062675 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:49:28.142358 1062675 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:49:28.142370 1062675 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:49:28.142401 1062675 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:49:28.142432 1062675 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:49:28.142460 1062675 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:49:28.142513 1062675 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:49:28.143311 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:49:28.170551 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:49:28.199228 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:49:28.234189 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:49:28.268238 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 00:49:28.298408 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:49:28.327069 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:49:28.358180 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/newest-cni-131853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 00:49:28.386584 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:49:28.413943 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:49:28.444505 1062675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:49:28.470990 1062675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:49:28.490364 1062675 ssh_runner.go:195] Run: openssl version
	I0917 00:49:28.496552 1062675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:49:28.507335 1062675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:49:28.511228 1062675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:49:28.511278 1062675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:49:28.518592 1062675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:49:28.528750 1062675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:49:28.539248 1062675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:49:28.543247 1062675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:49:28.543308 1062675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:49:28.550702 1062675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:49:28.560687 1062675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:49:28.571352 1062675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:49:28.575330 1062675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:49:28.575383 1062675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:49:28.582755 1062675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:49:28.593048 1062675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:49:28.596996 1062675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:49:28.604023 1062675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:49:28.611021 1062675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:49:28.618566 1062675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:49:28.626042 1062675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:49:28.633231 1062675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
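	
	(Annotation: the series of `openssl x509 -noout -checkend 86400` runs above verifies that each control-plane certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. A small Go sketch of the same check using crypto/x509; the path shown is one of the certs from the log:)
	
// Sketch: the -checkend 86400 check done in Go. Parse the PEM certificate
// and test whether it is still valid at now + 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// mirrors -checkend: does the cert expire before the deadline?
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for another 24h:", ok)
}
	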
	I0917 00:49:28.640632 1062675 kubeadm.go:392] StartCluster: {Name:newest-cni-131853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-131853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:49:28.640792 1062675 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:49:28.661380 1062675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:49:28.673582 1062675 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:49:28.673603 1062675 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:49:28.673654 1062675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:49:28.685533 1062675 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:49:28.686506 1062675 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-131853" does not appear in /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:49:28.687248 1062675 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-661878/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-131853" cluster setting kubeconfig missing "newest-cni-131853" context setting]
	I0917 00:49:28.688450 1062675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:49:28.690363 1062675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:49:28.702167 1062675 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0917 00:49:28.702213 1062675 kubeadm.go:593] duration metric: took 28.602357ms to restartPrimaryControlPlane
	I0917 00:49:28.702226 1062675 kubeadm.go:394] duration metric: took 61.609748ms to StartCluster
	I0917 00:49:28.702257 1062675 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:49:28.702337 1062675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:49:28.704646 1062675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:49:28.704940 1062675 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:49:28.705195 1062675 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:49:28.705312 1062675 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-131853"
	I0917 00:49:28.705333 1062675 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-131853"
	W0917 00:49:28.705342 1062675 addons.go:247] addon storage-provisioner should already be in state true
	I0917 00:49:28.705343 1062675 addons.go:69] Setting dashboard=true in profile "newest-cni-131853"
	I0917 00:49:28.705372 1062675 addons.go:69] Setting default-storageclass=true in profile "newest-cni-131853"
	I0917 00:49:28.705377 1062675 addons.go:238] Setting addon dashboard=true in "newest-cni-131853"
	I0917 00:49:28.705378 1062675 config.go:182] Loaded profile config "newest-cni-131853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	W0917 00:49:28.705388 1062675 addons.go:247] addon dashboard should already be in state true
	I0917 00:49:28.705376 1062675 addons.go:69] Setting metrics-server=true in profile "newest-cni-131853"
	I0917 00:49:28.705393 1062675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-131853"
	I0917 00:49:28.705413 1062675 addons.go:238] Setting addon metrics-server=true in "newest-cni-131853"
	I0917 00:49:28.705379 1062675 host.go:66] Checking if "newest-cni-131853" exists ...
	I0917 00:49:28.705420 1062675 host.go:66] Checking if "newest-cni-131853" exists ...
	W0917 00:49:28.705431 1062675 addons.go:247] addon metrics-server should already be in state true
	I0917 00:49:28.705568 1062675 host.go:66] Checking if "newest-cni-131853" exists ...
	I0917 00:49:28.705751 1062675 cli_runner.go:164] Run: docker container inspect newest-cni-131853 --format={{.State.Status}}
	I0917 00:49:28.705935 1062675 cli_runner.go:164] Run: docker container inspect newest-cni-131853 --format={{.State.Status}}
	I0917 00:49:28.705985 1062675 cli_runner.go:164] Run: docker container inspect newest-cni-131853 --format={{.State.Status}}
	I0917 00:49:28.706116 1062675 cli_runner.go:164] Run: docker container inspect newest-cni-131853 --format={{.State.Status}}
	I0917 00:49:28.707122 1062675 out.go:179] * Verifying Kubernetes components...
	I0917 00:49:28.708774 1062675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:49:28.730472 1062675 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0917 00:49:28.733270 1062675 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0917 00:49:28.734858 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0917 00:49:28.734882 1062675 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0917 00:49:28.735052 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:28.739594 1062675 addons.go:238] Setting addon default-storageclass=true in "newest-cni-131853"
	W0917 00:49:28.739622 1062675 addons.go:247] addon default-storageclass should already be in state true
	I0917 00:49:28.739654 1062675 host.go:66] Checking if "newest-cni-131853" exists ...
	I0917 00:49:28.740070 1062675 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0917 00:49:28.740139 1062675 cli_runner.go:164] Run: docker container inspect newest-cni-131853 --format={{.State.Status}}
	I0917 00:49:28.740154 1062675 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:49:28.742058 1062675 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 00:49:28.742148 1062675 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 00:49:28.742092 1062675 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:49:28.742240 1062675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:49:28.742318 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:28.742389 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:28.779346 1062675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/newest-cni-131853/id_rsa Username:docker}
	I0917 00:49:28.779346 1062675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/newest-cni-131853/id_rsa Username:docker}
	I0917 00:49:28.784863 1062675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/newest-cni-131853/id_rsa Username:docker}
	I0917 00:49:28.788660 1062675 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:49:28.788684 1062675 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:49:28.788742 1062675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-131853
	I0917 00:49:28.817597 1062675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33110 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/newest-cni-131853/id_rsa Username:docker}
	I0917 00:49:28.850443 1062675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:49:28.887322 1062675 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:49:28.887401 1062675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:49:28.917886 1062675 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 00:49:28.917927 1062675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0917 00:49:28.921382 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0917 00:49:28.921407 1062675 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0917 00:49:28.924723 1062675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:49:28.939053 1062675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:49:28.949038 1062675 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 00:49:28.949069 1062675 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 00:49:28.952045 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0917 00:49:28.952069 1062675 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0917 00:49:28.986422 1062675 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 00:49:28.986453 1062675 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 00:49:28.992073 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0917 00:49:28.992103 1062675 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0917 00:49:29.013441 1062675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 00:49:29.022067 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0917 00:49:29.022097 1062675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0917 00:49:29.024194 1062675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:49:29.024244 1062675 retry.go:31] will retry after 257.027472ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 00:49:29.031379 1062675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:49:29.031418 1062675 retry.go:31] will retry after 148.112851ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:49:29.051615 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0917 00:49:29.051647 1062675 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0917 00:49:29.086760 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0917 00:49:29.086792 1062675 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0917 00:49:29.100404 1062675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:49:29.100445 1062675 retry.go:31] will retry after 339.33914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:49:29.119038 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0917 00:49:29.119072 1062675 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0917 00:49:29.144804 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0917 00:49:29.144833 1062675 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0917 00:49:29.169819 1062675 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0917 00:49:29.169846 1062675 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0917 00:49:29.180319 1062675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:49:29.200856 1062675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0917 00:49:29.282158 1062675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:49:29.387559 1062675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:49:29.440822 1062675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0917 00:49:28.289668 1045703 pod_ready.go:104] pod "coredns-66bc5c9577-6sg84" is not "Ready", error: <nil>
	W0917 00:49:30.290975 1045703 pod_ready.go:104] pod "coredns-66bc5c9577-6sg84" is not "Ready", error: <nil>
	I0917 00:49:30.790675 1045703 pod_ready.go:94] pod "coredns-66bc5c9577-6sg84" is "Ready"
	I0917 00:49:30.790709 1045703 pod_ready.go:86] duration metric: took 31.006766143s for pod "coredns-66bc5c9577-6sg84" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:30.790722 1045703 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7qvch" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:30.792804 1045703 pod_ready.go:99] pod "coredns-66bc5c9577-7qvch" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-7qvch" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-7qvch" not found
	I0917 00:49:30.792840 1045703 pod_ready.go:86] duration metric: took 2.110291ms for pod "coredns-66bc5c9577-7qvch" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:30.795548 1045703 pod_ready.go:83] waiting for pod "etcd-embed-certs-411882" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:30.799965 1045703 pod_ready.go:94] pod "etcd-embed-certs-411882" is "Ready"
	I0917 00:49:30.799995 1045703 pod_ready.go:86] duration metric: took 4.416417ms for pod "etcd-embed-certs-411882" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:30.802394 1045703 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-411882" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:30.806340 1045703 pod_ready.go:94] pod "kube-apiserver-embed-certs-411882" is "Ready"
	I0917 00:49:30.806361 1045703 pod_ready.go:86] duration metric: took 3.941277ms for pod "kube-apiserver-embed-certs-411882" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:30.808270 1045703 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-411882" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:31.188500 1045703 pod_ready.go:94] pod "kube-controller-manager-embed-certs-411882" is "Ready"
	I0917 00:49:31.188542 1045703 pod_ready.go:86] duration metric: took 380.249561ms for pod "kube-controller-manager-embed-certs-411882" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:31.388529 1045703 pod_ready.go:83] waiting for pod "kube-proxy-h7grc" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:31.281557 1062675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.101186144s)
	I0917 00:49:31.729659 1062675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.528753137s)
	I0917 00:49:31.731561 1062675 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-131853 addons enable metrics-server
	
	W0917 00:49:27.788602 1046037 pod_ready.go:104] pod "coredns-66bc5c9577-lkf52" is not "Ready", error: <nil>
	W0917 00:49:29.789004 1046037 pod_ready.go:104] pod "coredns-66bc5c9577-lkf52" is not "Ready", error: <nil>
	W0917 00:49:31.790137 1046037 pod_ready.go:104] pod "coredns-66bc5c9577-lkf52" is not "Ready", error: <nil>
	I0917 00:49:31.899408 1062675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.617197267s)
	I0917 00:49:31.899480 1062675 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.511881241s)
	I0917 00:49:31.899519 1062675 api_server.go:72] duration metric: took 3.194527402s to wait for apiserver process to appear ...
	I0917 00:49:31.899532 1062675 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:49:31.899558 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:31.899562 1062675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.458706682s)
	I0917 00:49:31.899582 1062675 addons.go:479] Verifying addon metrics-server=true in "newest-cni-131853"
	I0917 00:49:31.902462 1062675 out.go:179] * Enabled addons: default-storageclass, dashboard, storage-provisioner, metrics-server
	I0917 00:49:31.788447 1045703 pod_ready.go:94] pod "kube-proxy-h7grc" is "Ready"
	I0917 00:49:31.788477 1045703 pod_ready.go:86] duration metric: took 399.918349ms for pod "kube-proxy-h7grc" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:31.988834 1045703 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-411882" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:32.388669 1045703 pod_ready.go:94] pod "kube-scheduler-embed-certs-411882" is "Ready"
	I0917 00:49:32.388700 1045703 pod_ready.go:86] duration metric: took 399.837294ms for pod "kube-scheduler-embed-certs-411882" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:32.388724 1045703 pod_ready.go:40] duration metric: took 32.614953389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:49:32.450153 1045703 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:49:32.451826 1045703 out.go:179] * Done! kubectl is now configured to use "embed-certs-411882" cluster and "default" namespace by default
	I0917 00:49:31.905001 1062675 addons.go:514] duration metric: took 3.199799066s for enable addons: enabled=[default-storageclass dashboard storage-provisioner metrics-server]
	I0917 00:49:31.906558 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:49:31.906595 1062675 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:49:32.400428 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:32.406195 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:49:32.406227 1062675 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:49:32.899826 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:32.904831 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:49:32.904863 1062675 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:49:33.400553 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:33.404998 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:49:33.405023 1062675 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:49:33.899625 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:33.904081 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:49:33.904126 1062675 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:49:34.399722 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:34.404098 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:49:34.404124 1062675 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:49:34.899720 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:34.908143 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:49:34.908183 1062675 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:49:33.788122 1046037 pod_ready.go:94] pod "coredns-66bc5c9577-lkf52" is "Ready"
	I0917 00:49:33.788157 1046037 pod_ready.go:86] duration metric: took 33.506029331s for pod "coredns-66bc5c9577-lkf52" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:33.788172 1046037 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mmbsf" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:33.790018 1046037 pod_ready.go:99] pod "coredns-66bc5c9577-mmbsf" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-mmbsf" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-mmbsf" not found
	I0917 00:49:33.790038 1046037 pod_ready.go:86] duration metric: took 1.859184ms for pod "coredns-66bc5c9577-mmbsf" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:33.792562 1046037 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-990042" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:33.799123 1046037 pod_ready.go:94] pod "etcd-default-k8s-diff-port-990042" is "Ready"
	I0917 00:49:33.799149 1046037 pod_ready.go:86] duration metric: took 6.560336ms for pod "etcd-default-k8s-diff-port-990042" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:33.801234 1046037 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-990042" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:33.804944 1046037 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-990042" is "Ready"
	I0917 00:49:33.804966 1046037 pod_ready.go:86] duration metric: took 3.71129ms for pod "kube-apiserver-default-k8s-diff-port-990042" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:33.806748 1046037 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-990042" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:34.185512 1046037 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-990042" is "Ready"
	I0917 00:49:34.185540 1046037 pod_ready.go:86] duration metric: took 378.774284ms for pod "kube-controller-manager-default-k8s-diff-port-990042" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:34.385886 1046037 pod_ready.go:83] waiting for pod "kube-proxy-2krzg" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:34.786753 1046037 pod_ready.go:94] pod "kube-proxy-2krzg" is "Ready"
	I0917 00:49:34.786782 1046037 pod_ready.go:86] duration metric: took 400.85306ms for pod "kube-proxy-2krzg" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:34.986997 1046037 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-990042" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:35.386805 1046037 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-990042" is "Ready"
	I0917 00:49:35.386845 1046037 pod_ready.go:86] duration metric: took 399.807005ms for pod "kube-scheduler-default-k8s-diff-port-990042" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:49:35.386860 1046037 pod_ready.go:40] duration metric: took 35.109565658s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:49:35.439142 1046037 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:49:35.441344 1046037 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-990042" cluster and "default" namespace by default
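The pod_ready.go lines above record the same pattern for every control-plane component: list the kube-system pods carrying one of the expected labels, then block until each pod either reports the Ready condition or is gone. Purely as an illustration, and not minikube's implementation, the following Go sketch shows how such a wait can be written with client-go; the kubeconfig path, label selector, timeout, and poll interval are all assumptions.

// waitready.go: a minimal sketch (not minikube's pod_ready.go) of waiting until
// every pod matching a label selector in kube-system reports the Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "k8s-app=kube-dns" // one of the labels the log waits on
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			allReady := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("all pods matching", selector, "are Ready")
				return
			}
		}
		time.Sleep(2 * time.Second) // poll interval chosen arbitrarily
	}
	fmt.Println("timed out waiting for", selector)
}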
	I0917 00:49:35.400006 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:35.406281 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:49:35.406319 1062675 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:49:35.899822 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:35.904689 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:49:35.904733 1062675 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:49:36.400101 1062675 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:49:36.404551 1062675 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0917 00:49:36.405720 1062675 api_server.go:141] control plane version: v1.34.0
	I0917 00:49:36.405754 1062675 api_server.go:131] duration metric: took 4.506212791s to wait for apiserver health ...
	I0917 00:49:36.405766 1062675 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:49:36.410112 1062675 system_pods.go:59] 8 kube-system pods found
	I0917 00:49:36.410158 1062675 system_pods.go:61] "coredns-66bc5c9577-2ffbr" [6db5d1df-595a-4608-9cf2-7c0a003b945c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:49:36.410173 1062675 system_pods.go:61] "etcd-newest-cni-131853" [accce24c-5df9-4978-aa40-a96c6cc0c0d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:49:36.410189 1062675 system_pods.go:61] "kube-apiserver-newest-cni-131853" [4c59ee76-d411-43ac-a559-3668d1ef6b69] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:49:36.410198 1062675 system_pods.go:61] "kube-controller-manager-newest-cni-131853" [58d4fd88-18d4-40a5-a10b-4183f1e1b86e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:49:36.410214 1062675 system_pods.go:61] "kube-proxy-h4hhn" [356edd73-a1c8-46ba-9d89-427ac6916857] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:49:36.410226 1062675 system_pods.go:61] "kube-scheduler-newest-cni-131853" [bf4c3ef5-29fe-4a16-b74b-307102a5c145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:49:36.410237 1062675 system_pods.go:61] "metrics-server-746fcd58dc-ncqck" [151011c0-44c2-4d33-bbd8-6cacddbc2c2a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 00:49:36.410249 1062675 system_pods.go:61] "storage-provisioner" [124de67d-22ac-4e6b-9629-fd67623b4857] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:49:36.410258 1062675 system_pods.go:74] duration metric: took 4.483952ms to wait for pod list to return data ...
	I0917 00:49:36.410272 1062675 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:49:36.412692 1062675 default_sa.go:45] found service account: "default"
	I0917 00:49:36.412713 1062675 default_sa.go:55] duration metric: took 2.434123ms for default service account to be created ...
	I0917 00:49:36.412724 1062675 kubeadm.go:578] duration metric: took 7.707733241s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 00:49:36.412746 1062675 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:49:36.415346 1062675 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:49:36.415373 1062675 node_conditions.go:123] node cpu capacity is 8
	I0917 00:49:36.415388 1062675 node_conditions.go:105] duration metric: took 2.632545ms to run NodePressure ...
	I0917 00:49:36.415403 1062675 start.go:241] waiting for startup goroutines ...
	I0917 00:49:36.415418 1062675 start.go:246] waiting for cluster config update ...
	I0917 00:49:36.415435 1062675 start.go:255] writing updated cluster config ...
	I0917 00:49:36.415735 1062675 ssh_runner.go:195] Run: rm -f paused
	I0917 00:49:36.478006 1062675 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:49:36.480113 1062675 out.go:179] * Done! kubectl is now configured to use "newest-cni-131853" cluster and "default" namespace by default
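The api_server.go entries above show the same /healthz probe repeating roughly every half second, each 500 response listing which poststarthooks are still failing, until the 200 at 00:49:36 unblocks the rest of start-up. As a rough illustration only, not minikube's code, the Go sketch below shows that kind of wait loop; the endpoint is copied from the log, while the timeout, interval, and the insecure TLS client are assumptions.

// healthzwait.go: a minimal sketch (not minikube's api_server.go) of polling a
// kube-apiserver /healthz endpoint until it answers 200 OK or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; the TLS handling is illustrative only.
	// A real client would trust the cluster CA instead of skipping verification.
	url := "https://192.168.103.2:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(90 * time.Second) // timeout chosen arbitrarily
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// A non-200 body enumerates the failing poststarthooks,
			// in the same format as the dumps recorded above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen above
	}
	fmt.Println("timed out waiting for apiserver health")
}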
	
	
	==> Docker <==
	Sep 17 00:49:27 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:27Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 17 00:49:27 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:27Z" level=info msg="Loaded network plugin cni"
	Sep 17 00:49:27 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:27Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 17 00:49:27 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:27Z" level=info msg="Setting cgroupDriver systemd"
	Sep 17 00:49:27 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:27Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 17 00:49:27 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:27Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 17 00:49:27 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:27Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 17 00:49:27 newest-cni-131853 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 17 00:49:28 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:28Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-2ffbr_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"fb7300b43138738f10592cd885ff065d10de3ea755ddf97407f8ec852a27d28f\""
	Sep 17 00:49:28 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ca39c0273afbec0e7035152b55eabc585abb567a5513c91d006b0c1cd6ce3e2c/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:28 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cb4d8f1af1b6000976ca4c6a2084730b9f73ed04363e1a07e02c2c0ecaccd70c/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 17 00:49:28 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/95cde05f4d8087df264a6cf8b8c7bf1025b08e1d95ab770ecb7ac70813cfa5fc/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:28 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e15f186667d29a8e895b49680868e23b056c622f6ca0cad7f7b22fb0ea302822/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.0.0/24,},}"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e4dba6d985144f48f0503aeb0e1f0e79fc9de67552f55d56fcec426c265ea16e/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/14e5d5c51aeca37e736d13de1aa16162adb1733a2b32bac9e35fa5c397962680/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/942935f853593d7285670670bfe4d65765fb1a206cd3e4b61150704c3cf81077/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31127331878f54cfcf5d43438f3e72b9daea7ea263adb76f5031e3273240c2cb/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.956552210Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.956639929Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.959605842Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.959650299Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.984303088Z" level=info msg="ignoring event" container=2a5a3009883a4de98a128063205e5ededeb0fa86d439d50466063f16fe15d037 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 00:49:32 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:32.002021215Z" level=info msg="ignoring event" container=47f4540029fa9b8a5978c078cad6e233b21616905214d9b3ca7b6c90cc269a89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 00:49:40 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:40Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.0.0/24,},}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1450243f4737       52546a367cc9e       9 seconds ago       Running             coredns                   1                   942935f853593       coredns-66bc5c9577-2ffbr
	2a5a3009883a4       6e38f40d628db       9 seconds ago       Exited              storage-provisioner       2                   31127331878f5       storage-provisioner
	47f4540029fa9       df0860106674d       9 seconds ago       Exited              kube-proxy                2                   e4dba6d985144       kube-proxy-h4hhn
	bec7c181b9ef1       46169d968e920       11 seconds ago      Running             kube-scheduler            1                   e15f186667d29       kube-scheduler-newest-cni-131853
	6107c7200b643       5f1f5298c888d       12 seconds ago      Running             etcd                      1                   95cde05f4d808       etcd-newest-cni-131853
	32eed4faeb5d2       90550c43ad2bc       12 seconds ago      Running             kube-apiserver            1                   cb4d8f1af1b60       kube-apiserver-newest-cni-131853
	19204e9fcc801       a0af72f2ec6d6       12 seconds ago      Running             kube-controller-manager   1                   ca39c0273afbe       kube-controller-manager-newest-cni-131853
	a73eda8fced35       52546a367cc9e       32 seconds ago      Exited              coredns                   0                   fb7300b431387       coredns-66bc5c9577-2ffbr
	412395648f1ff       5f1f5298c888d       44 seconds ago      Exited              etcd                      0                   47747320ac4eb       etcd-newest-cni-131853
	2e25278666954       46169d968e920       44 seconds ago      Exited              kube-scheduler            0                   92a4b2dd76bbc       kube-scheduler-newest-cni-131853
	973b0a93ff1b3       a0af72f2ec6d6       44 seconds ago      Exited              kube-controller-manager   0                   8b5f2c7f5fbf5       kube-controller-manager-newest-cni-131853
	4cd391b72ee73       90550c43ad2bc       44 seconds ago      Exited              kube-apiserver            0                   13a111ccad0b5       kube-apiserver-newest-cni-131853
	
	
	==> coredns [a73eda8fced3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51471 - 21856 "HINFO IN 1671484569658969291.2329482321069416358. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041513461s
	
	
	==> coredns [c1450243f473] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36329 - 52327 "HINFO IN 6798412162064350935.3265723935977831039. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023614585s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               newest-cni-131853
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-131853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=newest-cni-131853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_49_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:48:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-131853
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:49:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:49:40 +0000   Wed, 17 Sep 2025 00:48:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:49:40 +0000   Wed, 17 Sep 2025 00:48:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:49:40 +0000   Wed, 17 Sep 2025 00:48:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:49:40 +0000   Wed, 17 Sep 2025 00:48:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-131853
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 71e9e84fc0994429ac85cf071094fa5e
	  System UUID:                744a73e7-ffc9-4fe9-8238-88dd32596d74
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-2ffbr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     33s
	  kube-system                 etcd-newest-cni-131853                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kube-apiserver-newest-cni-131853              250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-newest-cni-131853     200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-h4hhn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-131853              100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 metrics-server-746fcd58dc-ncqck               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vgrn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qw4jq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  NodeAllocatableEnforced  44s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-131853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-131853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x7 over 44s)  kubelet          Node newest-cni-131853 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-131853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet          Node newest-cni-131853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node newest-cni-131853 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           34s                node-controller  Node newest-cni-131853 event: Registered Node newest-cni-131853 in Controller
	  Normal  NodeHasSufficientPID     12s (x7 over 12s)  kubelet          Node newest-cni-131853 status is now: NodeHasSufficientPID
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x9 over 12s)  kubelet          Node newest-cni-131853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x7 over 12s)  kubelet          Node newest-cni-131853 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  12s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-131853 event: Registered Node newest-cni-131853 in Controller
	  Normal  Starting                 1s                 kubelet          Starting kubelet.
	  Normal  Starting                 1s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  0s                 kubelet          Node newest-cni-131853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    0s                 kubelet          Node newest-cni-131853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     0s                 kubelet          Node newest-cni-131853 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 5e 74 77 2c c0 08 06
	[ +17.653920] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3a 94 5b dc 93 2f 08 06
	[Sep17 00:47] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 4a 99 ad fd 64 e5 08 06
	[  +8.270528] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 3d a1 8e 53 f7 08 06
	[Sep17 00:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 d9 f3 a3 82 3c 08 06
	[  +0.024713] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 12 62 35 6f cf d7 08 06
	[  +1.316474] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000025] ll header: 00000000: ff ff ff ff ff ff 56 2b cf 06 63 b3 08 06
	[ +38.722633] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe b4 cf fc 4d bb 08 06
	[  +0.045053] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 eb cc df 9a 71 08 06
	[  +0.274823] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7e f9 e9 d0 31 5d 08 06
	[  +0.003272] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a a9 fe 24 83 ee 08 06
	[Sep17 00:49] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e d8 43 dc c8 bb 08 06
	[ +23.355275] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 98 b5 e1 f7 f0 08 06
	
	
	==> etcd [412395648f1f] <==
	{"level":"warn","ts":"2025-09-17T00:48:58.907021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.926266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.936943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.957231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.971656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.987248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:59.085205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48568","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:49:09.500346Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T00:49:09.500437Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"newest-cni-131853","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"error","ts":"2025-09-17T00:49:09.500531Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:49:16.502045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:49:16.503203Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:49:16.503293Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2025-09-17T00:49:16.503412Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-17T00:49:16.503442Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-17T00:49:16.504118Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:49:16.504196Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:49:16.504209Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:49:16.504126Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:49:16.504235Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:49:16.504244Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.103.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:49:16.505718Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"error","ts":"2025-09-17T00:49:16.505773Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.103.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:49:16.505796Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-17T00:49:16.505803Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"newest-cni-131853","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	
	==> etcd [6107c7200b64] <==
	{"level":"warn","ts":"2025-09-17T00:49:30.530385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.539668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.546657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.554451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.566520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.580593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.590648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.597355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.604986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.613042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.619277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.625827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.632315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.639813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.646232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.652437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.659494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.667446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.673643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.681353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.688171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.701735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.709184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.716164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.768408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52030","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:49:40 up  3:31,  0 users,  load average: 3.31, 3.11, 2.29
	Linux newest-cni-131853 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [32eed4faeb5d] <==
	 > logger="UnhandledError"
	I0917 00:49:31.259346       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 00:49:31.264710       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:49:31.289106       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:49:31.394758       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 00:49:31.493172       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:49:31.561885       1 controller.go:667] quota admission added evaluator for: namespaces
	I0917 00:49:31.621940       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 00:49:31.631799       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 00:49:31.702762       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.18.223"}
	I0917 00:49:31.719687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.174.205"}
	I0917 00:49:32.149536       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 00:49:32.256859       1 handler_proxy.go:99] no RequestInfo found in the context
	W0917 00:49:32.256923       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 00:49:32.256897       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0917 00:49:32.256963       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0917 00:49:32.257039       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 00:49:32.258056       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 00:49:36.259471       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0917 00:49:38.891210       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:49:39.116300       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:49:39.166667       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [4cd391b72ee7] <==
	W0917 00:49:18.755091       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.755170       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.791028       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.805398       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.807745       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.896055       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.935237       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.954285       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.958732       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.985417       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.021133       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.094264       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.110251       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.117956       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.171971       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.174227       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.227093       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.228351       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.303181       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.356568       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.387180       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.414757       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.422158       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.452311       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.467988       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [19204e9fcc80] <==
	I0917 00:49:38.886938       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-131853"
	I0917 00:49:38.887078       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:49:38.890834       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 00:49:38.893142       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0917 00:49:38.895702       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:49:38.899425       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0917 00:49:38.904540       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:49:38.904545       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0917 00:49:38.909094       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 00:49:38.910755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:49:38.910779       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:49:38.910788       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:49:38.913526       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0917 00:49:38.914008       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:49:38.919181       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:49:38.920017       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:49:38.920374       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:49:38.920864       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:49:38.920979       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:49:38.923851       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:49:38.924423       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:49:38.926126       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:49:38.928983       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:49:38.929805       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:49:38.949584       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [973b0a93ff1b] <==
	I0917 00:49:06.711774       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:49:06.711845       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:49:06.720244       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:49:06.740187       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:49:06.740200       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0917 00:49:06.740326       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:49:06.740435       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-131853"
	I0917 00:49:06.740520       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:49:06.740454       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:49:06.740730       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:49:06.741677       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:49:06.741699       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0917 00:49:06.741736       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:49:06.741763       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:49:06.741774       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:49:06.741807       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0917 00:49:06.741818       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:49:06.741985       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 00:49:06.742070       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0917 00:49:06.742097       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:49:06.742610       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0917 00:49:06.744147       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:49:06.746348       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:49:06.753121       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0917 00:49:06.763624       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [47f4540029fa] <==
	E0917 00:49:31.975618       1 run.go:72] "command failed" err="failed complete: too many open files"
	
	
	==> kube-scheduler [2e2527866695] <==
	E0917 00:48:59.798739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:48:59.799086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:48:59.799121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:48:59.799210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0917 00:48:59.799263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:48:59.799346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:48:59.801167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:48:59.801519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:49:00.625714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0917 00:49:00.660124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:49:00.664450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:49:00.696333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0917 00:49:00.859369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0917 00:49:00.871628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0917 00:49:00.943969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:49:00.983416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0917 00:49:00.993644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:49:01.188743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0917 00:49:03.271993       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:09.511365       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:49:09.511529       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:09.511563       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:49:09.513306       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:49:09.513379       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:49:09.513402       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bec7c181b9ef] <==
	I0917 00:49:29.741568       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:49:31.222793       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:49:31.222978       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:49:31.230254       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:49:31.230294       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:49:31.230340       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:31.230349       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:31.230394       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:49:31.230423       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:49:31.231619       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:49:31.231665       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:49:31.330376       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:49:31.330435       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:31.330477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.054770    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc4ede2c9c905b7419e820b124a59140-ca-certs\") pod \"kube-apiserver-newest-cni-131853\" (UID: \"bc4ede2c9c905b7419e820b124a59140\") " pod="kube-system/kube-apiserver-newest-cni-131853"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.054788    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5cb7ebc2a459b3f3143098e48f78896c-ca-certs\") pod \"kube-controller-manager-newest-cni-131853\" (UID: \"5cb7ebc2a459b3f3143098e48f78896c\") " pod="kube-system/kube-controller-manager-newest-cni-131853"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.827882    2981 apiserver.go:52] "Watching apiserver"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.841454    2981 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860320    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjn8b\" (UniqueName: \"kubernetes.io/projected/ac91edc8-07ca-498e-9873-6c53d28f4286-kube-api-access-bjn8b\") pod \"dashboard-metrics-scraper-6ffb444bf9-vgrn6\" (UID: \"ac91edc8-07ca-498e-9873-6c53d28f4286\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vgrn6"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860392    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/124de67d-22ac-4e6b-9629-fd67623b4857-tmp\") pod \"storage-provisioner\" (UID: \"124de67d-22ac-4e6b-9629-fd67623b4857\") " pod="kube-system/storage-provisioner"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860419    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/356edd73-a1c8-46ba-9d89-427ac6916857-xtables-lock\") pod \"kube-proxy-h4hhn\" (UID: \"356edd73-a1c8-46ba-9d89-427ac6916857\") " pod="kube-system/kube-proxy-h4hhn"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860454    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2f475aa4-dfd8-481d-b6d3-fc2560ec1a76-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-qw4jq\" (UID: \"2f475aa4-dfd8-481d-b6d3-fc2560ec1a76\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qw4jq"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860479    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78qw\" (UniqueName: \"kubernetes.io/projected/2f475aa4-dfd8-481d-b6d3-fc2560ec1a76-kube-api-access-x78qw\") pod \"kubernetes-dashboard-855c9754f9-qw4jq\" (UID: \"2f475aa4-dfd8-481d-b6d3-fc2560ec1a76\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qw4jq"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860700    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/356edd73-a1c8-46ba-9d89-427ac6916857-lib-modules\") pod \"kube-proxy-h4hhn\" (UID: \"356edd73-a1c8-46ba-9d89-427ac6916857\") " pod="kube-system/kube-proxy-h4hhn"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860894    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac91edc8-07ca-498e-9873-6c53d28f4286-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vgrn6\" (UID: \"ac91edc8-07ca-498e-9873-6c53d28f4286\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vgrn6"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.013317    2981 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.013461    2981 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.013676    2981 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.013801    2981 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.021679    2981 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-131853\" already exists" pod="kube-system/kube-controller-manager-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.022433    2981 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-131853\" already exists" pod="kube-system/kube-scheduler-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.022588    2981 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-131853\" already exists" pod="kube-system/etcd-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.022644    2981 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-131853\" already exists" pod="kube-system/kube-apiserver-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.133173    2981 scope.go:117] "RemoveContainer" containerID="47f4540029fa9b8a5978c078cad6e233b21616905214d9b3ca7b6c90cc269a89"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.133284    2981 scope.go:117] "RemoveContainer" containerID="2a5a3009883a4de98a128063205e5ededeb0fa86d439d50466063f16fe15d037"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.225960    2981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.226048    2981 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.226278    2981 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-ncqck_kube-system(151011c0-44c2-4d33-bbd8-6cacddbc2c2a): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" logger="UnhandledError"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.226350    2981 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-ncqck" podUID="151011c0-44c2-4d33-bbd8-6cacddbc2c2a"
	
	
	==> storage-provisioner [2a5a3009883a] <==
	I0917 00:49:31.955811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0917 00:49:31.959775       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-131853 -n newest-cni-131853
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-131853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-ncqck dashboard-metrics-scraper-6ffb444bf9-vgrn6 kubernetes-dashboard-855c9754f9-qw4jq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-131853 describe pod metrics-server-746fcd58dc-ncqck dashboard-metrics-scraper-6ffb444bf9-vgrn6 kubernetes-dashboard-855c9754f9-qw4jq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-131853 describe pod metrics-server-746fcd58dc-ncqck dashboard-metrics-scraper-6ffb444bf9-vgrn6 kubernetes-dashboard-855c9754f9-qw4jq: exit status 1 (73.510348ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-ncqck" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-vgrn6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qw4jq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-131853 describe pod metrics-server-746fcd58dc-ncqck dashboard-metrics-scraper-6ffb444bf9-vgrn6 kubernetes-dashboard-855c9754f9-qw4jq: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-131853
helpers_test.go:243: (dbg) docker inspect newest-cni-131853:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5",
	        "Created": "2025-09-17T00:48:41.103833985Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1062872,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:49:20.503481218Z",
	            "FinishedAt": "2025-09-17T00:49:19.702454833Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5/hosts",
	        "LogPath": "/var/lib/docker/containers/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5/e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5-json.log",
	        "Name": "/newest-cni-131853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-131853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-131853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e38559e7f80c75d2d5bc38cafb298532e213215e126025397c42effbe9cf33d5",
	                "LowerDir": "/var/lib/docker/overlay2/09654e3d1dd46f263cb84bb9ef362c0b597e9e7be19d8b58e9533c33df7cfca9-init/diff:/var/lib/docker/overlay2/c570dacd810ac8c787e753d7a3ab5a399cb123b70a29f21b9da6ee575027d4fd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09654e3d1dd46f263cb84bb9ef362c0b597e9e7be19d8b58e9533c33df7cfca9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09654e3d1dd46f263cb84bb9ef362c0b597e9e7be19d8b58e9533c33df7cfca9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09654e3d1dd46f263cb84bb9ef362c0b597e9e7be19d8b58e9533c33df7cfca9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-131853",
	                "Source": "/var/lib/docker/volumes/newest-cni-131853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-131853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-131853",
	                "name.minikube.sigs.k8s.io": "newest-cni-131853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "308a7be2975b571ad72bfad9dec8e91e274c878c0b48665655a0dab61deb5c3a",
	            "SandboxKey": "/var/run/docker/netns/308a7be2975b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-131853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:e9:0d:da:87:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d4557b311633dd1244c0791d913116cec0ac391db4447b38fc8fa55426ee83f0",
	                    "EndpointID": "0fa5902e6086f0adf0085a10e26e169c91ef6030cf4afd479a98285efc377b40",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-131853",
	                        "e38559e7f80c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-131853 -n newest-cni-131853
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-131853 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-131853 logs -n 25: (1.422231286s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-843787 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker                                                                                                                                     │ cert-expiration-843787       │ jenkins │ v1.37.0 │ 17 Sep 25 00:47 UTC │ 17 Sep 25 00:48 UTC │
	│ delete  │ -p kubernetes-upgrade-401604                                                                                                                                                                                                                    │ kubernetes-upgrade-401604    │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ delete  │ -p cert-expiration-843787                                                                                                                                                                                                                       │ cert-expiration-843787       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ start   │ -p embed-certs-411882 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-411882           │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p disable-driver-mounts-171805                                                                                                                                                                                                                 │ disable-driver-mounts-171805 │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ start   │ -p default-k8s-diff-port-990042 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-990042 │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:49 UTC │
	│ image   │ old-k8s-version-591839 image list --format=json                                                                                                                                                                                                 │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ pause   │ -p old-k8s-version-591839 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ unpause │ -p old-k8s-version-591839 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ delete  │ -p old-k8s-version-591839                                                                                                                                                                                                                       │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ delete  │ -p old-k8s-version-591839                                                                                                                                                                                                                       │ old-k8s-version-591839       │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:48 UTC │
	│ start   │ -p newest-cni-131853 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:48 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-131853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ stop    │ -p newest-cni-131853 --alsologtostderr -v=3                                                                                                                                                                                                     │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-131853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p newest-cni-131853 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p no-preload-152605 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-152605            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ image   │ newest-cni-131853 image list --format=json                                                                                                                                                                                                      │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ pause   │ -p newest-cni-131853 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p no-preload-152605 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-152605            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ unpause │ -p newest-cni-131853 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-131853            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ delete  │ -p no-preload-152605                                                                                                                                                                                                                            │ no-preload-152605            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ addons  │ enable metrics-server -p embed-certs-411882 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-411882           │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	│ delete  │ -p no-preload-152605                                                                                                                                                                                                                            │ no-preload-152605            │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ start   │ -p auto-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker                                                                                                                       │ auto-656031                  │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:49:42
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:49:42.158505 1070764 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:49:42.158804 1070764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:49:42.158815 1070764 out.go:374] Setting ErrFile to fd 2...
	I0917 00:49:42.158820 1070764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:49:42.159057 1070764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:49:42.159534 1070764 out.go:368] Setting JSON to false
	I0917 00:49:42.161001 1070764 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12714,"bootTime":1758057468,"procs":342,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:49:42.161101 1070764 start.go:140] virtualization: kvm guest
	I0917 00:49:42.163013 1070764 out.go:179] * [auto-656031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:49:42.164593 1070764 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:49:42.164607 1070764 notify.go:220] Checking for updates...
	I0917 00:49:42.167626 1070764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:49:42.168869 1070764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:49:42.170146 1070764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:49:42.171480 1070764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:49:42.172797 1070764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:49:42.174939 1070764 config.go:182] Loaded profile config "default-k8s-diff-port-990042": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:49:42.175057 1070764 config.go:182] Loaded profile config "embed-certs-411882": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:49:42.175147 1070764 config.go:182] Loaded profile config "newest-cni-131853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:49:42.175238 1070764 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:49:42.201100 1070764 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:49:42.201285 1070764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:49:42.264101 1070764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:49:42.253181132 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:49:42.264261 1070764 docker.go:318] overlay module found
	I0917 00:49:42.266694 1070764 out.go:179] * Using the docker driver based on user configuration
	I0917 00:49:42.268517 1070764 start.go:304] selected driver: docker
	I0917 00:49:42.268542 1070764 start.go:918] validating driver "docker" against <nil>
	I0917 00:49:42.268559 1070764 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:49:42.269487 1070764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:49:42.343469 1070764 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:49:42.331680373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:49:42.343687 1070764 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:49:42.343978 1070764 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:49:42.346045 1070764 out.go:179] * Using Docker driver with root privileges
	I0917 00:49:42.347461 1070764 cni.go:84] Creating CNI manager for ""
	I0917 00:49:42.347555 1070764 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 00:49:42.347568 1070764 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 00:49:42.347650 1070764 start.go:348] cluster config:
	{Name:auto-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s
}
	I0917 00:49:42.349537 1070764 out.go:179] * Starting "auto-656031" primary control-plane node in "auto-656031" cluster
	I0917 00:49:42.350942 1070764 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:49:42.352206 1070764 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:49:42.353408 1070764 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:49:42.353495 1070764 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:49:42.353514 1070764 cache.go:58] Caching tarball of preloaded images
	I0917 00:49:42.353570 1070764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:49:42.353650 1070764 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:49:42.353666 1070764 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:49:42.353801 1070764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/config.json ...
	I0917 00:49:42.353830 1070764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/config.json: {Name:mkb80edcf6199f7ff53ecfa17ce26933025bffa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:49:42.378501 1070764 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:49:42.378524 1070764 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:49:42.378543 1070764 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:49:42.378572 1070764 start.go:360] acquireMachinesLock for auto-656031: {Name:mk10b4de04c1c73f21f7620669be3c039f733790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:49:42.378682 1070764 start.go:364] duration metric: took 91.221µs to acquireMachinesLock for "auto-656031"
	I0917 00:49:42.378718 1070764 start.go:93] Provisioning new machine with config: &{Name:auto-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:49:42.378795 1070764 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> Docker <==
	Sep 17 00:49:28 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e15f186667d29a8e895b49680868e23b056c622f6ca0cad7f7b22fb0ea302822/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.0.0/24,},}"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e4dba6d985144f48f0503aeb0e1f0e79fc9de67552f55d56fcec426c265ea16e/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/14e5d5c51aeca37e736d13de1aa16162adb1733a2b32bac9e35fa5c397962680/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/942935f853593d7285670670bfe4d65765fb1a206cd3e4b61150704c3cf81077/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:31 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31127331878f54cfcf5d43438f3e72b9daea7ea263adb76f5031e3273240c2cb/resolv.conf as [nameserver 192.168.103.1 search local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.956552210Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.956639929Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.959605842Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.959650299Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 17 00:49:31 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:31.984303088Z" level=info msg="ignoring event" container=2a5a3009883a4de98a128063205e5ededeb0fa86d439d50466063f16fe15d037 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 00:49:32 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:32.002021215Z" level=info msg="ignoring event" container=47f4540029fa9b8a5978c078cad6e233b21616905214d9b3ca7b6c90cc269a89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 00:49:40 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:40Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.0.0/24,},}"
	Sep 17 00:49:41 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:41.221401835Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 17 00:49:41 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:41.221441343Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 17 00:49:41 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:41.225341940Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 17 00:49:41 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:41.225387977Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 17 00:49:41 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:41.378618515Z" level=info msg="ignoring event" container=b77446719a586de2638a408bb451400282ef601b86280a0b30a68910a278ceaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 00:49:41 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f8b9329dcb4533a297b52546ac7e13bf1996fd563d8b7af9c5f949e60f38079/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:49:41 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1bd55602e4cf6ddd5067343dc89ac0bb06fb32ee909ffdf2cdeef73cde9b45b4/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local local europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 17 00:49:41 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:41.621069684Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 17 00:49:41 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:41.681813318Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 17 00:49:41 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:41.681985744Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 17 00:49:41 newest-cni-131853 cri-dockerd[1138]: time="2025-09-17T00:49:41Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 17 00:49:41 newest-cni-131853 dockerd[814]: time="2025-09-17T00:49:41.941814046Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b77446719a586       6e38f40d628db       2 seconds ago       Exited              storage-provisioner       3                   31127331878f5       storage-provisioner
	06531a5b3abea       df0860106674d       2 seconds ago       Running             kube-proxy                3                   e4dba6d985144       kube-proxy-h4hhn
	c1450243f4737       52546a367cc9e       12 seconds ago      Running             coredns                   1                   942935f853593       coredns-66bc5c9577-2ffbr
	47f4540029fa9       df0860106674d       12 seconds ago      Exited              kube-proxy                2                   e4dba6d985144       kube-proxy-h4hhn
	bec7c181b9ef1       46169d968e920       14 seconds ago      Running             kube-scheduler            1                   e15f186667d29       kube-scheduler-newest-cni-131853
	6107c7200b643       5f1f5298c888d       15 seconds ago      Running             etcd                      1                   95cde05f4d808       etcd-newest-cni-131853
	19204e9fcc801       a0af72f2ec6d6       15 seconds ago      Running             kube-controller-manager   1                   ca39c0273afbe       kube-controller-manager-newest-cni-131853
	32eed4faeb5d2       90550c43ad2bc       15 seconds ago      Running             kube-apiserver            1                   cb4d8f1af1b60       kube-apiserver-newest-cni-131853
	a73eda8fced35       52546a367cc9e       35 seconds ago      Exited              coredns                   0                   fb7300b431387       coredns-66bc5c9577-2ffbr
	412395648f1ff       5f1f5298c888d       47 seconds ago      Exited              etcd                      0                   47747320ac4eb       etcd-newest-cni-131853
	2e25278666954       46169d968e920       47 seconds ago      Exited              kube-scheduler            0                   92a4b2dd76bbc       kube-scheduler-newest-cni-131853
	973b0a93ff1b3       a0af72f2ec6d6       47 seconds ago      Exited              kube-controller-manager   0                   8b5f2c7f5fbf5       kube-controller-manager-newest-cni-131853
	4cd391b72ee73       90550c43ad2bc       47 seconds ago      Exited              kube-apiserver            0                   13a111ccad0b5       kube-apiserver-newest-cni-131853
	
	
	==> coredns [a73eda8fced3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51471 - 21856 "HINFO IN 1671484569658969291.2329482321069416358. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041513461s
	
	
	==> coredns [c1450243f473] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36329 - 52327 "HINFO IN 6798412162064350935.3265723935977831039. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023614585s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               newest-cni-131853
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-131853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=newest-cni-131853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_49_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:48:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-131853
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:49:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:49:40 +0000   Wed, 17 Sep 2025 00:48:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:49:40 +0000   Wed, 17 Sep 2025 00:48:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:49:40 +0000   Wed, 17 Sep 2025 00:48:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:49:40 +0000   Wed, 17 Sep 2025 00:48:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-131853
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 71e9e84fc0994429ac85cf071094fa5e
	  System UUID:                744a73e7-ffc9-4fe9-8238-88dd32596d74
	  Boot ID:                    38a5d1c6-3b0d-42c9-b748-79065a969107
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-2ffbr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     36s
	  kube-system                 etcd-newest-cni-131853                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kube-apiserver-newest-cni-131853              250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-newest-cni-131853     200m (2%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-h4hhn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-newest-cni-131853              100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 metrics-server-746fcd58dc-ncqck               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vgrn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qw4jq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node newest-cni-131853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node newest-cni-131853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 47s)  kubelet          Node newest-cni-131853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  41s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node newest-cni-131853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node newest-cni-131853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node newest-cni-131853 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                node-controller  Node newest-cni-131853 event: Registered Node newest-cni-131853 in Controller
	  Normal  Starting                 15s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15s (x9 over 15s)  kubelet          Node newest-cni-131853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15s (x7 over 15s)  kubelet          Node newest-cni-131853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15s (x7 over 15s)  kubelet          Node newest-cni-131853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-131853 event: Registered Node newest-cni-131853 in Controller
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node newest-cni-131853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node newest-cni-131853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node newest-cni-131853 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 5e 74 77 2c c0 08 06
	[ +17.653920] IPv4: martian source 10.244.0.1 from 10.244.0.9, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3a 94 5b dc 93 2f 08 06
	[Sep17 00:47] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 4a 99 ad fd 64 e5 08 06
	[  +8.270528] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e 3d a1 8e 53 f7 08 06
	[Sep17 00:48] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 d9 f3 a3 82 3c 08 06
	[  +0.024713] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 12 62 35 6f cf d7 08 06
	[  +1.316474] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000025] ll header: 00000000: ff ff ff ff ff ff 56 2b cf 06 63 b3 08 06
	[ +38.722633] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fe b4 cf fc 4d bb 08 06
	[  +0.045053] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 eb cc df 9a 71 08 06
	[  +0.274823] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7e f9 e9 d0 31 5d 08 06
	[  +0.003272] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a a9 fe 24 83 ee 08 06
	[Sep17 00:49] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e d8 43 dc c8 bb 08 06
	[ +23.355275] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 98 b5 e1 f7 f0 08 06
	
	
	==> etcd [412395648f1f] <==
	{"level":"warn","ts":"2025-09-17T00:48:58.907021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.926266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.936943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.957231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.971656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:58.987248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:48:59.085205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48568","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:49:09.500346Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T00:49:09.500437Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"newest-cni-131853","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"error","ts":"2025-09-17T00:49:09.500531Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:49:16.502045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:49:16.503203Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:49:16.503293Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2025-09-17T00:49:16.503412Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-17T00:49:16.503442Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-17T00:49:16.504118Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:49:16.504196Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:49:16.504209Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:49:16.504126Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:49:16.504235Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:49:16.504244Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.103.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:49:16.505718Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"error","ts":"2025-09-17T00:49:16.505773Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.103.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:49:16.505796Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-17T00:49:16.505803Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"newest-cni-131853","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	
	==> etcd [6107c7200b64] <==
	{"level":"warn","ts":"2025-09-17T00:49:30.530385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.539668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.546657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.554451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.566520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.580593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.590648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.597355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.604986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.613042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.619277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.625827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.632315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.639813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.646232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.652437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.659494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.667446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.673643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.681353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.688171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.701735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.709184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.716164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:49:30.768408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52030","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:49:43 up  3:31,  0 users,  load average: 4.40, 3.34, 2.37
	Linux newest-cni-131853 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [32eed4faeb5d] <==
	I0917 00:49:31.259346       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 00:49:31.264710       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:49:31.289106       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:49:31.394758       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 00:49:31.493172       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:49:31.561885       1 controller.go:667] quota admission added evaluator for: namespaces
	I0917 00:49:31.621940       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 00:49:31.631799       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 00:49:31.702762       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.18.223"}
	I0917 00:49:31.719687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.174.205"}
	I0917 00:49:32.149536       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 00:49:32.256859       1 handler_proxy.go:99] no RequestInfo found in the context
	W0917 00:49:32.256923       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 00:49:32.256897       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0917 00:49:32.256963       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0917 00:49:32.257039       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 00:49:32.258056       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 00:49:36.259471       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0917 00:49:38.891210       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:49:39.116300       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:49:39.166667       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 00:49:42.105657       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [4cd391b72ee7] <==
	W0917 00:49:18.755091       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.755170       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.791028       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.805398       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.807745       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.896055       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.935237       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.954285       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.958732       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:18.985417       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.021133       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.094264       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.110251       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.117956       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.171971       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.174227       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.227093       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.228351       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.303181       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.356568       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.387180       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.414757       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.422158       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.452311       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 00:49:19.467988       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [19204e9fcc80] <==
	I0917 00:49:38.886938       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-131853"
	I0917 00:49:38.887078       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:49:38.890834       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 00:49:38.893142       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0917 00:49:38.895702       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:49:38.899425       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0917 00:49:38.904540       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:49:38.904545       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0917 00:49:38.909094       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 00:49:38.910755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:49:38.910779       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:49:38.910788       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:49:38.913526       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0917 00:49:38.914008       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:49:38.919181       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:49:38.920017       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:49:38.920374       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:49:38.920864       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:49:38.920979       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:49:38.923851       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:49:38.924423       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:49:38.926126       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:49:38.928983       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:49:38.929805       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:49:38.949584       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [973b0a93ff1b] <==
	I0917 00:49:06.711774       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:49:06.711845       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:49:06.720244       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:49:06.740187       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:49:06.740200       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0917 00:49:06.740326       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:49:06.740435       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-131853"
	I0917 00:49:06.740520       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 00:49:06.740454       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:49:06.740730       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:49:06.741677       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:49:06.741699       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0917 00:49:06.741736       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:49:06.741763       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:49:06.741774       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:49:06.741807       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0917 00:49:06.741818       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:49:06.741985       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 00:49:06.742070       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0917 00:49:06.742097       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:49:06.742610       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0917 00:49:06.744147       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:49:06.746348       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:49:06.753121       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0917 00:49:06.763624       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [06531a5b3abe] <==
	I0917 00:49:41.429994       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:49:41.503492       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:49:41.604101       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:49:41.604142       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0917 00:49:41.604256       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:49:41.635147       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:49:41.635227       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:49:41.641599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:49:41.642413       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:49:41.642442       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:49:41.644793       1 config.go:200] "Starting service config controller"
	I0917 00:49:41.644818       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:49:41.644789       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:49:41.644850       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:49:41.645129       1 config.go:309] "Starting node config controller"
	I0917 00:49:41.645149       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:49:41.645156       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:49:41.645137       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:49:41.645165       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:49:41.745081       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:49:41.745145       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:49:41.745197       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [47f4540029fa] <==
	E0917 00:49:31.975618       1 run.go:72] "command failed" err="failed complete: too many open files"
	
	
	==> kube-scheduler [2e2527866695] <==
	E0917 00:48:59.798739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:48:59.799086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:48:59.799121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:48:59.799210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0917 00:48:59.799263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:48:59.799346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:48:59.801167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:48:59.801519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:49:00.625714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0917 00:49:00.660124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:49:00.664450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:49:00.696333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0917 00:49:00.859369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0917 00:49:00.871628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0917 00:49:00.943969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:49:00.983416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0917 00:49:00.993644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:49:01.188743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0917 00:49:03.271993       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:09.511365       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:49:09.511529       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:09.511563       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:49:09.513306       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:49:09.513379       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:49:09.513402       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bec7c181b9ef] <==
	I0917 00:49:29.741568       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:49:31.222793       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:49:31.222978       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:49:31.230254       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:49:31.230294       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:49:31.230340       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:31.230349       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:31.230394       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:49:31.230423       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:49:31.231619       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:49:31.231665       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:49:31.330376       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:49:31.330435       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:49:31.330477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860479    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78qw\" (UniqueName: \"kubernetes.io/projected/2f475aa4-dfd8-481d-b6d3-fc2560ec1a76-kube-api-access-x78qw\") pod \"kubernetes-dashboard-855c9754f9-qw4jq\" (UID: \"2f475aa4-dfd8-481d-b6d3-fc2560ec1a76\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qw4jq"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860700    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/356edd73-a1c8-46ba-9d89-427ac6916857-lib-modules\") pod \"kube-proxy-h4hhn\" (UID: \"356edd73-a1c8-46ba-9d89-427ac6916857\") " pod="kube-system/kube-proxy-h4hhn"
	Sep 17 00:49:40 newest-cni-131853 kubelet[2981]: I0917 00:49:40.860894    2981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac91edc8-07ca-498e-9873-6c53d28f4286-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vgrn6\" (UID: \"ac91edc8-07ca-498e-9873-6c53d28f4286\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vgrn6"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.013317    2981 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.013461    2981 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.013676    2981 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.013801    2981 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.021679    2981 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-131853\" already exists" pod="kube-system/kube-controller-manager-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.022433    2981 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-131853\" already exists" pod="kube-system/kube-scheduler-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.022588    2981 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-131853\" already exists" pod="kube-system/etcd-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.022644    2981 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-131853\" already exists" pod="kube-system/kube-apiserver-newest-cni-131853"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.133173    2981 scope.go:117] "RemoveContainer" containerID="47f4540029fa9b8a5978c078cad6e233b21616905214d9b3ca7b6c90cc269a89"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: I0917 00:49:41.133284    2981 scope.go:117] "RemoveContainer" containerID="2a5a3009883a4de98a128063205e5ededeb0fa86d439d50466063f16fe15d037"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.225960    2981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.226048    2981 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.226278    2981 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-ncqck_kube-system(151011c0-44c2-4d33-bbd8-6cacddbc2c2a): ErrImagePull: Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" logger="UnhandledError"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.226350    2981 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-ncqck" podUID="151011c0-44c2-4d33-bbd8-6cacddbc2c2a"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.684689    2981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.684759    2981 kuberuntime_image.go:43] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.685056    2981 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-6ffb444bf9-vgrn6_kubernetes-dashboard(ac91edc8-07ca-498e-9873-6c53d28f4286): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
	Sep 17 00:49:41 newest-cni-131853 kubelet[2981]: E0917 00:49:41.685133    2981 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vgrn6" podUID="ac91edc8-07ca-498e-9873-6c53d28f4286"
	Sep 17 00:49:42 newest-cni-131853 kubelet[2981]: E0917 00:49:42.034707    2981 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vgrn6" podUID="ac91edc8-07ca-498e-9873-6c53d28f4286"
	Sep 17 00:49:42 newest-cni-131853 kubelet[2981]: I0917 00:49:42.057640    2981 scope.go:117] "RemoveContainer" containerID="2a5a3009883a4de98a128063205e5ededeb0fa86d439d50466063f16fe15d037"
	Sep 17 00:49:42 newest-cni-131853 kubelet[2981]: I0917 00:49:42.058071    2981 scope.go:117] "RemoveContainer" containerID="b77446719a586de2638a408bb451400282ef601b86280a0b30a68910a278ceaf"
	Sep 17 00:49:42 newest-cni-131853 kubelet[2981]: E0917 00:49:42.058223    2981 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(124de67d-22ac-4e6b-9629-fd67623b4857)\"" pod="kube-system/storage-provisioner" podUID="124de67d-22ac-4e6b-9629-fd67623b4857"
	
	
	==> storage-provisioner [b77446719a58] <==
	I0917 00:49:41.346678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0917 00:49:41.356279       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-131853 -n newest-cni-131853
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-131853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-ncqck dashboard-metrics-scraper-6ffb444bf9-vgrn6 kubernetes-dashboard-855c9754f9-qw4jq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-131853 describe pod metrics-server-746fcd58dc-ncqck dashboard-metrics-scraper-6ffb444bf9-vgrn6 kubernetes-dashboard-855c9754f9-qw4jq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-131853 describe pod metrics-server-746fcd58dc-ncqck dashboard-metrics-scraper-6ffb444bf9-vgrn6 kubernetes-dashboard-855c9754f9-qw4jq: exit status 1 (77.110368ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-ncqck" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-vgrn6" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qw4jq" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-131853 describe pod metrics-server-746fcd58dc-ncqck dashboard-metrics-scraper-6ffb444bf9-vgrn6 kubernetes-dashboard-855c9754f9-qw4jq: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.34s)

TestNetworkPlugins/group/bridge/Start (276.61s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0917 00:52:08.125607  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p bridge-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: exit status 80 (4m36.584201885s)

-- stdout --
	* [bridge-656031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "bridge-656031" primary control-plane node in "bridge-656031" cluster
	* Pulling base image v0.0.48 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0917 00:52:06.572258 1116630 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:52:06.572496 1116630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:52:06.572505 1116630 out.go:374] Setting ErrFile to fd 2...
	I0917 00:52:06.572509 1116630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:52:06.572685 1116630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:52:06.573228 1116630 out.go:368] Setting JSON to false
	I0917 00:52:06.574407 1116630 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12859,"bootTime":1758057468,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:52:06.574496 1116630 start.go:140] virtualization: kvm guest
	I0917 00:52:06.576821 1116630 out.go:179] * [bridge-656031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:52:06.578441 1116630 notify.go:220] Checking for updates...
	I0917 00:52:06.578499 1116630 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:52:06.580122 1116630 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:52:06.581415 1116630 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:52:06.582695 1116630 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:52:06.583953 1116630 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:52:06.585088 1116630 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:52:06.586708 1116630 config.go:182] Loaded profile config "enable-default-cni-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:52:06.586800 1116630 config.go:182] Loaded profile config "flannel-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:52:06.586881 1116630 config.go:182] Loaded profile config "kindnet-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:52:06.587022 1116630 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:52:06.611762 1116630 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:52:06.611884 1116630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:52:06.677128 1116630 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:52:06.665599546 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:52:06.677239 1116630 docker.go:318] overlay module found
	I0917 00:52:06.679187 1116630 out.go:179] * Using the docker driver based on user configuration
	I0917 00:52:06.680723 1116630 start.go:304] selected driver: docker
	I0917 00:52:06.680740 1116630 start.go:918] validating driver "docker" against <nil>
	I0917 00:52:06.680752 1116630 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:52:06.681464 1116630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:52:06.739590 1116630 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:52:06.72961765 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[N
ame:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:52:06.739750 1116630 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:52:06.739999 1116630 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:52:06.741926 1116630 out.go:179] * Using Docker driver with root privileges
	I0917 00:52:06.743241 1116630 cni.go:84] Creating CNI manager for "bridge"
	I0917 00:52:06.743263 1116630 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 00:52:06.743365 1116630 start.go:348] cluster config:
	{Name:bridge-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInt
erval:1m0s}
	I0917 00:52:06.745010 1116630 out.go:179] * Starting "bridge-656031" primary control-plane node in "bridge-656031" cluster
	I0917 00:52:06.747150 1116630 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:52:06.748723 1116630 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:52:06.749953 1116630 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:52:06.750014 1116630 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:52:06.750017 1116630 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:52:06.750028 1116630 cache.go:58] Caching tarball of preloaded images
	I0917 00:52:06.750138 1116630 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:52:06.750155 1116630 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:52:06.750284 1116630 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/config.json ...
	I0917 00:52:06.750313 1116630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/config.json: {Name:mk4c904d416d697c70accb9702408934e5904b22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:06.773474 1116630 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:52:06.773496 1116630 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:52:06.773518 1116630 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:52:06.773551 1116630 start.go:360] acquireMachinesLock for bridge-656031: {Name:mkc40ad5f42eaf3a76d06ce464585b8bbb08838a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:52:06.773676 1116630 start.go:364] duration metric: took 100.297µs to acquireMachinesLock for "bridge-656031"
	I0917 00:52:06.773708 1116630 start.go:93] Provisioning new machine with config: &{Name:bridge-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:52:06.773817 1116630 start.go:125] createHost starting for "" (driver="docker")
	I0917 00:52:06.775884 1116630 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:52:06.776227 1116630 start.go:159] libmachine.API.Create for "bridge-656031" (driver="docker")
	I0917 00:52:06.776266 1116630 client.go:168] LocalClient.Create starting
	I0917 00:52:06.776341 1116630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0917 00:52:06.776384 1116630 main.go:141] libmachine: Decoding PEM data...
	I0917 00:52:06.776407 1116630 main.go:141] libmachine: Parsing certificate...
	I0917 00:52:06.776492 1116630 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0917 00:52:06.776531 1116630 main.go:141] libmachine: Decoding PEM data...
	I0917 00:52:06.776550 1116630 main.go:141] libmachine: Parsing certificate...
	I0917 00:52:06.776937 1116630 cli_runner.go:164] Run: docker network inspect bridge-656031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 00:52:06.794173 1116630 cli_runner.go:211] docker network inspect bridge-656031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 00:52:06.794257 1116630 network_create.go:284] running [docker network inspect bridge-656031] to gather additional debugging logs...
	I0917 00:52:06.794282 1116630 cli_runner.go:164] Run: docker network inspect bridge-656031
	W0917 00:52:06.811924 1116630 cli_runner.go:211] docker network inspect bridge-656031 returned with exit code 1
	I0917 00:52:06.811988 1116630 network_create.go:287] error running [docker network inspect bridge-656031]: docker network inspect bridge-656031: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-656031 not found
	I0917 00:52:06.812002 1116630 network_create.go:289] output of [docker network inspect bridge-656031]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-656031 not found
	
	** /stderr **
	I0917 00:52:06.812124 1116630 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:52:06.831597 1116630 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ab651df73000 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:63:f8:73:0d:ee} reservation:<nil>}
	I0917 00:52:06.832240 1116630 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-91db5a27742d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:6c:9c:db:5a:d4} reservation:<nil>}
	I0917 00:52:06.833637 1116630 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0515bd298a94 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:91:5b:dc:7a:d8} reservation:<nil>}
	I0917 00:52:06.835006 1116630 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fe53275754dc IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:d5:fc:70:c3:89} reservation:<nil>}
	I0917 00:52:06.835772 1116630 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b5745d1cb4ca IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:ee:e8:cd:f9:ec} reservation:<nil>}
	I0917 00:52:06.836511 1116630 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d7a380}
	I0917 00:52:06.836541 1116630 network_create.go:124] attempt to create docker network bridge-656031 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0917 00:52:06.836598 1116630 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-656031 bridge-656031
	I0917 00:52:06.902665 1116630 network_create.go:108] docker network bridge-656031 192.168.94.0/24 created
	I0917 00:52:06.902701 1116630 kic.go:121] calculated static IP "192.168.94.2" for the "bridge-656031" container
	I0917 00:52:06.902766 1116630 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:52:06.920735 1116630 cli_runner.go:164] Run: docker volume create bridge-656031 --label name.minikube.sigs.k8s.io=bridge-656031 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:52:06.939449 1116630 oci.go:103] Successfully created a docker volume bridge-656031
	I0917 00:52:06.939527 1116630 cli_runner.go:164] Run: docker run --rm --name bridge-656031-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-656031 --entrypoint /usr/bin/test -v bridge-656031:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:52:07.359378 1116630 oci.go:107] Successfully prepared a docker volume bridge-656031
	I0917 00:52:07.359425 1116630 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:52:07.359455 1116630 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:52:07.359522 1116630 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-656031:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:52:11.221987 1116630 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-656031:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.862406064s)
	I0917 00:52:11.222027 1116630 kic.go:203] duration metric: took 3.862565549s to extract preloaded images to volume ...
	W0917 00:52:11.222127 1116630 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:52:11.222170 1116630 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:52:11.222217 1116630 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:52:11.277607 1116630 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-656031 --name bridge-656031 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-656031 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-656031 --network bridge-656031 --ip 192.168.94.2 --volume bridge-656031:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:52:11.555338 1116630 cli_runner.go:164] Run: docker container inspect bridge-656031 --format={{.State.Running}}
	I0917 00:52:11.575854 1116630 cli_runner.go:164] Run: docker container inspect bridge-656031 --format={{.State.Status}}
	I0917 00:52:11.595764 1116630 cli_runner.go:164] Run: docker exec bridge-656031 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:52:11.644703 1116630 oci.go:144] the created container "bridge-656031" has a running status.
	I0917 00:52:11.644756 1116630 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/bridge-656031/id_rsa...
	I0917 00:52:11.786848 1116630 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/bridge-656031/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:52:11.817388 1116630 cli_runner.go:164] Run: docker container inspect bridge-656031 --format={{.State.Status}}
	I0917 00:52:11.845110 1116630 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:52:11.845132 1116630 kic_runner.go:114] Args: [docker exec --privileged bridge-656031 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:52:11.898971 1116630 cli_runner.go:164] Run: docker container inspect bridge-656031 --format={{.State.Status}}
	I0917 00:52:11.920440 1116630 machine.go:93] provisionDockerMachine start ...
	I0917 00:52:11.920562 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:11.939039 1116630 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:11.939366 1116630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0917 00:52:11.939388 1116630 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:52:12.080392 1116630 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-656031
	
	I0917 00:52:12.080426 1116630 ubuntu.go:182] provisioning hostname "bridge-656031"
	I0917 00:52:12.080493 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:12.101869 1116630 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:12.102228 1116630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0917 00:52:12.102253 1116630 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-656031 && echo "bridge-656031" | sudo tee /etc/hostname
	I0917 00:52:12.269591 1116630 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-656031
	
	I0917 00:52:12.269704 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:12.291791 1116630 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:12.292043 1116630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0917 00:52:12.292067 1116630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-656031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-656031/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-656031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:52:12.433154 1116630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:52:12.433185 1116630 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:52:12.433217 1116630 ubuntu.go:190] setting up certificates
	I0917 00:52:12.433236 1116630 provision.go:84] configureAuth start
	I0917 00:52:12.433295 1116630 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-656031
	I0917 00:52:12.456472 1116630 provision.go:143] copyHostCerts
	I0917 00:52:12.456533 1116630 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:52:12.456544 1116630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:52:12.456614 1116630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:52:12.456715 1116630 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:52:12.456724 1116630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:52:12.456752 1116630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:52:12.456809 1116630 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:52:12.456817 1116630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:52:12.456839 1116630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:52:12.456892 1116630 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.bridge-656031 san=[127.0.0.1 192.168.94.2 bridge-656031 localhost minikube]
	I0917 00:52:12.603320 1116630 provision.go:177] copyRemoteCerts
	I0917 00:52:12.603381 1116630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:52:12.603420 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:12.620839 1116630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/bridge-656031/id_rsa Username:docker}
	I0917 00:52:12.718986 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:52:12.748661 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:52:12.775207 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:52:12.803311 1116630 provision.go:87] duration metric: took 370.05984ms to configureAuth
	I0917 00:52:12.803338 1116630 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:52:12.803493 1116630 config.go:182] Loaded profile config "bridge-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:52:12.803541 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:12.826289 1116630 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:12.826603 1116630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0917 00:52:12.826625 1116630 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:52:12.971920 1116630 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:52:12.971950 1116630 ubuntu.go:71] root file system type: overlay
	I0917 00:52:12.972085 1116630 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:52:12.972168 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:12.991374 1116630 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:12.991654 1116630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0917 00:52:12.991758 1116630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:52:13.145399 1116630 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:52:13.145486 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:13.163154 1116630 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:13.163385 1116630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0917 00:52:13.163411 1116630 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:52:14.409666 1116630 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-17 00:52:13.142324802 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0917 00:52:14.409703 1116630 machine.go:96] duration metric: took 2.489223258s to provisionDockerMachine
	I0917 00:52:14.409718 1116630 client.go:171] duration metric: took 7.633444223s to LocalClient.Create
	I0917 00:52:14.409741 1116630 start.go:167] duration metric: took 7.633516911s to libmachine.API.Create "bridge-656031"
	I0917 00:52:14.409753 1116630 start.go:293] postStartSetup for "bridge-656031" (driver="docker")
	I0917 00:52:14.409766 1116630 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:52:14.409836 1116630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:52:14.409880 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:14.431210 1116630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/bridge-656031/id_rsa Username:docker}
	I0917 00:52:14.534727 1116630 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:52:14.539691 1116630 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:52:14.539729 1116630 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:52:14.539743 1116630 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:52:14.539751 1116630 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:52:14.539765 1116630 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:52:14.539824 1116630 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:52:14.539962 1116630 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:52:14.540089 1116630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:52:14.551484 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:52:14.583993 1116630 start.go:296] duration metric: took 174.222179ms for postStartSetup
	I0917 00:52:14.584487 1116630 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-656031
	I0917 00:52:14.603560 1116630 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/config.json ...
	I0917 00:52:14.603874 1116630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:52:14.603944 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:14.625241 1116630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/bridge-656031/id_rsa Username:docker}
	I0917 00:52:14.724775 1116630 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:52:14.730463 1116630 start.go:128] duration metric: took 7.956627715s to createHost
	I0917 00:52:14.730493 1116630 start.go:83] releasing machines lock for "bridge-656031", held for 7.956802537s
	I0917 00:52:14.730582 1116630 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-656031
	I0917 00:52:14.753166 1116630 ssh_runner.go:195] Run: cat /version.json
	I0917 00:52:14.753206 1116630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:52:14.753223 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:14.753265 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:14.773864 1116630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/bridge-656031/id_rsa Username:docker}
	I0917 00:52:14.774617 1116630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/bridge-656031/id_rsa Username:docker}
	I0917 00:52:14.960157 1116630 ssh_runner.go:195] Run: systemctl --version
	I0917 00:52:14.966010 1116630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:52:14.971352 1116630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:52:15.007282 1116630 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:52:15.007360 1116630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:52:15.040694 1116630 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:52:15.040726 1116630 start.go:495] detecting cgroup driver to use...
	I0917 00:52:15.040756 1116630 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:52:15.040871 1116630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:52:15.063737 1116630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:52:15.076520 1116630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:52:15.087775 1116630 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:52:15.087840 1116630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:52:15.098996 1116630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:52:15.111963 1116630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:52:15.122966 1116630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:52:15.138693 1116630 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:52:15.150128 1116630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:52:15.161096 1116630 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:52:15.172106 1116630 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:52:15.183807 1116630 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:52:15.193375 1116630 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:52:15.202812 1116630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:15.276775 1116630 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:52:15.365431 1116630 start.go:495] detecting cgroup driver to use...
	I0917 00:52:15.365484 1116630 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:52:15.365541 1116630 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:52:15.381374 1116630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:52:15.394365 1116630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:52:15.412142 1116630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:52:15.432275 1116630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:52:15.456929 1116630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:52:15.493052 1116630 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:52:15.498145 1116630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:52:15.513137 1116630 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:52:15.533517 1116630 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:52:15.652520 1116630 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:52:15.732607 1116630 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:52:15.732707 1116630 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:52:15.761296 1116630 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:52:15.778953 1116630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:15.865112 1116630 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:52:16.657684 1116630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:52:16.673533 1116630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:52:16.690405 1116630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:52:16.707601 1116630 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:52:16.798146 1116630 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:52:16.888321 1116630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:16.977424 1116630 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:52:17.001417 1116630 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:52:17.017146 1116630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:17.107679 1116630 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:52:17.199062 1116630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:52:17.214953 1116630 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:52:17.215038 1116630 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:52:17.219835 1116630 start.go:563] Will wait 60s for crictl version
	I0917 00:52:17.219962 1116630 ssh_runner.go:195] Run: which crictl
	I0917 00:52:17.224361 1116630 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:52:17.270607 1116630 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:52:17.270709 1116630 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:52:17.305338 1116630 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:52:17.334052 1116630 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:52:17.334152 1116630 cli_runner.go:164] Run: docker network inspect bridge-656031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:52:17.355031 1116630 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0917 00:52:17.361137 1116630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:52:17.375674 1116630 kubeadm.go:875] updating cluster {Name:bridge-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:52:17.375826 1116630 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:52:17.375889 1116630 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:52:17.402077 1116630 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 00:52:17.402100 1116630 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:52:17.402164 1116630 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:52:17.431759 1116630 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 00:52:17.431786 1116630 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:52:17.431801 1116630 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0917 00:52:17.432067 1116630 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-656031 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0917 00:52:17.432147 1116630 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:52:17.509162 1116630 cni.go:84] Creating CNI manager for "bridge"
	I0917 00:52:17.509236 1116630 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:52:17.509290 1116630 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-656031 NodeName:bridge-656031 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:52:17.509459 1116630 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "bridge-656031"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
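The block above is the rendered kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that the kubeadm init invocation later in this log consumes from /var/tmp/minikube/kubeadm.yaml. As an aside, a config like this can be sanity-checked by hand before any real bootstrap, assuming kubeadm is available inside the node:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run

which validates the file and prints what would be done without actually modifying the node.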
	I0917 00:52:17.509527 1116630 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:52:17.519700 1116630 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:52:17.519769 1116630 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 00:52:17.529553 1116630 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 00:52:17.549965 1116630 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:52:17.569122 1116630 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0917 00:52:17.588773 1116630 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0917 00:52:17.592995 1116630 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:52:17.605359 1116630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:17.692861 1116630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:52:17.718102 1116630 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031 for IP: 192.168.94.2
	I0917 00:52:17.718131 1116630 certs.go:194] generating shared ca certs ...
	I0917 00:52:17.718158 1116630 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:17.718325 1116630 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:52:17.718380 1116630 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:52:17.718393 1116630 certs.go:256] generating profile certs ...
	I0917 00:52:17.718466 1116630 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/client.key
	I0917 00:52:17.718481 1116630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/client.crt with IP's: []
	I0917 00:52:17.857096 1116630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/client.crt ...
	I0917 00:52:17.857129 1116630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/client.crt: {Name:mk3175c4ed9f44ce94814438ba9b5c1c4c9a277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:17.857329 1116630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/client.key ...
	I0917 00:52:17.857346 1116630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/client.key: {Name:mk0fe18dc72d22957cebf1d67c8be73e2de4054f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:17.857461 1116630 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.key.031cc64d
	I0917 00:52:17.857484 1116630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.crt.031cc64d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0917 00:52:18.166486 1116630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.crt.031cc64d ...
	I0917 00:52:18.166522 1116630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.crt.031cc64d: {Name:mk600d8035c7945e1f50998053abca807a5f38e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:18.166753 1116630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.key.031cc64d ...
	I0917 00:52:18.166777 1116630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.key.031cc64d: {Name:mk7aca085aeb46fe57de8fcb0ea4b81ed2a8913d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:18.166900 1116630 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.crt.031cc64d -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.crt
	I0917 00:52:18.167025 1116630 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.key.031cc64d -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.key
	I0917 00:52:18.167108 1116630 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/proxy-client.key
	I0917 00:52:18.167131 1116630 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/proxy-client.crt with IP's: []
	I0917 00:52:18.577191 1116630 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/proxy-client.crt ...
	I0917 00:52:18.577219 1116630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/proxy-client.crt: {Name:mk9b27a262f44b5ee2725b94c74544330794fb30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:18.577399 1116630 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/proxy-client.key ...
	I0917 00:52:18.577412 1116630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/proxy-client.key: {Name:mk4bc62759087eda0ce22bfa042a8f3368b201b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:18.577594 1116630 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:52:18.577631 1116630 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:52:18.577641 1116630 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:52:18.577662 1116630 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:52:18.577684 1116630 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:52:18.577704 1116630 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:52:18.577759 1116630 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:52:18.578376 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:52:18.606373 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:52:18.634883 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:52:18.663676 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:52:18.691322 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 00:52:18.722984 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:52:18.752959 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:52:18.781798 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/bridge-656031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:52:18.810448 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:52:18.841954 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:52:18.874069 1116630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:52:18.904593 1116630 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:52:18.930795 1116630 ssh_runner.go:195] Run: openssl version
	I0917 00:52:18.938090 1116630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:52:18.950721 1116630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:52:18.954867 1116630 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:52:18.954926 1116630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:52:18.964386 1116630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:52:18.975325 1116630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:52:18.987965 1116630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:52:18.993159 1116630 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:52:18.993224 1116630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:52:19.002036 1116630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:52:19.015311 1116630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:52:19.027563 1116630 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:52:19.032064 1116630 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:52:19.032131 1116630 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:52:19.040407 1116630 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:52:19.052321 1116630 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:52:19.056610 1116630 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:52:19.056674 1116630 kubeadm.go:392] StartCluster: {Name:bridge-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:52:19.056789 1116630 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:52:19.078640 1116630 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:52:19.088901 1116630 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 00:52:19.099970 1116630 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 00:52:19.100033 1116630 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 00:52:19.110334 1116630 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 00:52:19.110354 1116630 kubeadm.go:157] found existing configuration files:
	
	I0917 00:52:19.110396 1116630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 00:52:19.120265 1116630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 00:52:19.120347 1116630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 00:52:19.130316 1116630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 00:52:19.141076 1116630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 00:52:19.141159 1116630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 00:52:19.154617 1116630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 00:52:19.165298 1116630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 00:52:19.165362 1116630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 00:52:19.176098 1116630 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 00:52:19.187228 1116630 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 00:52:19.187292 1116630 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 00:52:19.198040 1116630 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 00:52:19.274145 1116630 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 00:52:19.345711 1116630 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 00:52:36.401934 1116630 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 00:52:36.402013 1116630 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 00:52:36.402169 1116630 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 00:52:36.402317 1116630 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 00:52:36.402430 1116630 kubeadm.go:310] OS: Linux
	I0917 00:52:36.402510 1116630 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 00:52:36.402578 1116630 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 00:52:36.402669 1116630 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 00:52:36.402739 1116630 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 00:52:36.402807 1116630 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 00:52:36.402892 1116630 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 00:52:36.403003 1116630 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 00:52:36.403063 1116630 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 00:52:36.403126 1116630 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 00:52:36.403273 1116630 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 00:52:36.403391 1116630 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 00:52:36.403488 1116630 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 00:52:36.406104 1116630 out.go:252]   - Generating certificates and keys ...
	I0917 00:52:36.406194 1116630 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 00:52:36.406300 1116630 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 00:52:36.406364 1116630 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 00:52:36.406411 1116630 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 00:52:36.406461 1116630 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 00:52:36.406506 1116630 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 00:52:36.406550 1116630 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 00:52:36.406650 1116630 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-656031 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0917 00:52:36.406697 1116630 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 00:52:36.406788 1116630 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-656031 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0917 00:52:36.406842 1116630 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 00:52:36.406897 1116630 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 00:52:36.406959 1116630 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 00:52:36.407007 1116630 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 00:52:36.407053 1116630 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 00:52:36.407130 1116630 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 00:52:36.407219 1116630 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 00:52:36.407312 1116630 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 00:52:36.407393 1116630 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 00:52:36.407499 1116630 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 00:52:36.407579 1116630 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 00:52:36.408704 1116630 out.go:252]   - Booting up control plane ...
	I0917 00:52:36.408779 1116630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 00:52:36.408846 1116630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 00:52:36.408930 1116630 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 00:52:36.409059 1116630 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 00:52:36.409140 1116630 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 00:52:36.409232 1116630 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 00:52:36.409333 1116630 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 00:52:36.409374 1116630 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 00:52:36.409493 1116630 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 00:52:36.409602 1116630 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 00:52:36.409686 1116630 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 5.001097079s
	I0917 00:52:36.409798 1116630 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 00:52:36.409901 1116630 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0917 00:52:36.410026 1116630 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 00:52:36.410134 1116630 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 00:52:36.410257 1116630 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.666917389s
	I0917 00:52:36.410318 1116630 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.416044357s
	I0917 00:52:36.410376 1116630 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.002035536s
	I0917 00:52:36.410509 1116630 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 00:52:36.410669 1116630 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 00:52:36.410750 1116630 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 00:52:36.411092 1116630 kubeadm.go:310] [mark-control-plane] Marking the node bridge-656031 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 00:52:36.411181 1116630 kubeadm.go:310] [bootstrap-token] Using token: 4uu28y.q6zv9nlkgfk7tokk
	I0917 00:52:36.412772 1116630 out.go:252]   - Configuring RBAC rules ...
	I0917 00:52:36.412895 1116630 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 00:52:36.413072 1116630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 00:52:36.413272 1116630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 00:52:36.413458 1116630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 00:52:36.413630 1116630 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 00:52:36.413752 1116630 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 00:52:36.413952 1116630 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 00:52:36.414015 1116630 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 00:52:36.414097 1116630 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 00:52:36.414107 1116630 kubeadm.go:310] 
	I0917 00:52:36.414229 1116630 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 00:52:36.414249 1116630 kubeadm.go:310] 
	I0917 00:52:36.414312 1116630 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 00:52:36.414319 1116630 kubeadm.go:310] 
	I0917 00:52:36.414354 1116630 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 00:52:36.414437 1116630 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 00:52:36.414479 1116630 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 00:52:36.414485 1116630 kubeadm.go:310] 
	I0917 00:52:36.414527 1116630 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 00:52:36.414533 1116630 kubeadm.go:310] 
	I0917 00:52:36.414570 1116630 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 00:52:36.414579 1116630 kubeadm.go:310] 
	I0917 00:52:36.414640 1116630 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 00:52:36.414758 1116630 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 00:52:36.414835 1116630 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 00:52:36.414845 1116630 kubeadm.go:310] 
	I0917 00:52:36.415005 1116630 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 00:52:36.415136 1116630 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 00:52:36.415157 1116630 kubeadm.go:310] 
	I0917 00:52:36.415264 1116630 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4uu28y.q6zv9nlkgfk7tokk \
	I0917 00:52:36.415393 1116630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e \
	I0917 00:52:36.415422 1116630 kubeadm.go:310] 	--control-plane 
	I0917 00:52:36.415430 1116630 kubeadm.go:310] 
	I0917 00:52:36.415564 1116630 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 00:52:36.415580 1116630 kubeadm.go:310] 
	I0917 00:52:36.415684 1116630 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4uu28y.q6zv9nlkgfk7tokk \
	I0917 00:52:36.415826 1116630 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e 
	I0917 00:52:36.415839 1116630 cni.go:84] Creating CNI manager for "bridge"
	I0917 00:52:36.417575 1116630 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 00:52:36.418863 1116630 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 00:52:36.430463 1116630 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 00:52:36.451554 1116630 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 00:52:36.451726 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:36.451847 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-656031 minikube.k8s.io/updated_at=2025_09_17T00_52_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=bridge-656031 minikube.k8s.io/primary=true
	I0917 00:52:36.463322 1116630 ops.go:34] apiserver oom_adj: -16
	I0917 00:52:36.541338 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:37.041931 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:37.542109 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:38.042194 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:38.541460 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:39.042077 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:39.542102 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:40.042297 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:40.542124 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:41.042127 1116630 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:52:41.118258 1116630 kubeadm.go:1105] duration metric: took 4.666588416s to wait for elevateKubeSystemPrivileges
	I0917 00:52:41.118301 1116630 kubeadm.go:394] duration metric: took 22.061631687s to StartCluster
	I0917 00:52:41.118323 1116630 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:41.118402 1116630 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:52:41.120102 1116630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:41.120391 1116630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 00:52:41.120402 1116630 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:52:41.120461 1116630 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:52:41.120581 1116630 addons.go:69] Setting storage-provisioner=true in profile "bridge-656031"
	I0917 00:52:41.120603 1116630 config.go:182] Loaded profile config "bridge-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:52:41.120610 1116630 addons.go:238] Setting addon storage-provisioner=true in "bridge-656031"
	I0917 00:52:41.120651 1116630 addons.go:69] Setting default-storageclass=true in profile "bridge-656031"
	I0917 00:52:41.120688 1116630 host.go:66] Checking if "bridge-656031" exists ...
	I0917 00:52:41.120703 1116630 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-656031"
	I0917 00:52:41.121207 1116630 cli_runner.go:164] Run: docker container inspect bridge-656031 --format={{.State.Status}}
	I0917 00:52:41.121265 1116630 cli_runner.go:164] Run: docker container inspect bridge-656031 --format={{.State.Status}}
	I0917 00:52:41.125292 1116630 out.go:179] * Verifying Kubernetes components...
	I0917 00:52:41.126823 1116630 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:41.147244 1116630 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:52:41.147379 1116630 addons.go:238] Setting addon default-storageclass=true in "bridge-656031"
	I0917 00:52:41.147431 1116630 host.go:66] Checking if "bridge-656031" exists ...
	I0917 00:52:41.148000 1116630 cli_runner.go:164] Run: docker container inspect bridge-656031 --format={{.State.Status}}
	I0917 00:52:41.152934 1116630 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:52:41.152963 1116630 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:52:41.153037 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:41.181453 1116630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/bridge-656031/id_rsa Username:docker}
	I0917 00:52:41.181656 1116630 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:52:41.181676 1116630 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:52:41.181739 1116630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-656031
	I0917 00:52:41.206723 1116630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/bridge-656031/id_rsa Username:docker}
	I0917 00:52:41.222610 1116630 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 00:52:41.259765 1116630 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:52:41.300845 1116630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:52:41.324439 1116630 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:52:41.466395 1116630 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0917 00:52:41.468115 1116630 node_ready.go:35] waiting up to 15m0s for node "bridge-656031" to be "Ready" ...
	I0917 00:52:41.478673 1116630 node_ready.go:49] node "bridge-656031" is "Ready"
	I0917 00:52:41.478729 1116630 node_ready.go:38] duration metric: took 10.5671ms for node "bridge-656031" to be "Ready" ...
	I0917 00:52:41.478751 1116630 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:52:41.478827 1116630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:52:41.664723 1116630 api_server.go:72] duration metric: took 544.289628ms to wait for apiserver process to appear ...
	I0917 00:52:41.664748 1116630 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:52:41.664769 1116630 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0917 00:52:41.672039 1116630 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0917 00:52:41.672858 1116630 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 00:52:41.673122 1116630 api_server.go:141] control plane version: v1.34.0
	I0917 00:52:41.673145 1116630 api_server.go:131] duration metric: took 8.389504ms to wait for apiserver health ...
	I0917 00:52:41.673153 1116630 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:52:41.674783 1116630 addons.go:514] duration metric: took 554.322164ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 00:52:41.677470 1116630 system_pods.go:59] 8 kube-system pods found
	I0917 00:52:41.677510 1116630 system_pods.go:61] "coredns-66bc5c9577-2khw2" [2d7a3b44-5310-459a-9df3-eea1a02f0cf8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:41.677522 1116630 system_pods.go:61] "coredns-66bc5c9577-rh9k8" [fe9d45c6-c1a8-40a1-b200-317fc0f58bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:41.677531 1116630 system_pods.go:61] "etcd-bridge-656031" [72828534-e659-485c-8a49-14137d4878ff] Running
	I0917 00:52:41.677540 1116630 system_pods.go:61] "kube-apiserver-bridge-656031" [2245d63b-9806-472f-a020-64147010d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:52:41.677546 1116630 system_pods.go:61] "kube-controller-manager-bridge-656031" [2eb1a08c-8237-4bc9-a9c9-593eaf5c2934] Running
	I0917 00:52:41.677555 1116630 system_pods.go:61] "kube-proxy-2x8jb" [66d194df-5760-4f8a-bff6-cedcf37b09c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:52:41.677568 1116630 system_pods.go:61] "kube-scheduler-bridge-656031" [14aa2801-6113-4cf9-b19c-4387b065e253] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:52:41.677577 1116630 system_pods.go:61] "storage-provisioner" [df1f959b-41b0-430a-87e2-5387c0281c5b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:52:41.677590 1116630 system_pods.go:74] duration metric: took 4.429072ms to wait for pod list to return data ...
	I0917 00:52:41.677598 1116630 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:52:41.680137 1116630 default_sa.go:45] found service account: "default"
	I0917 00:52:41.680160 1116630 default_sa.go:55] duration metric: took 2.553615ms for default service account to be created ...
	I0917 00:52:41.680171 1116630 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:52:41.682807 1116630 system_pods.go:86] 8 kube-system pods found
	I0917 00:52:41.682838 1116630 system_pods.go:89] "coredns-66bc5c9577-2khw2" [2d7a3b44-5310-459a-9df3-eea1a02f0cf8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:41.682849 1116630 system_pods.go:89] "coredns-66bc5c9577-rh9k8" [fe9d45c6-c1a8-40a1-b200-317fc0f58bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:41.682857 1116630 system_pods.go:89] "etcd-bridge-656031" [72828534-e659-485c-8a49-14137d4878ff] Running
	I0917 00:52:41.682869 1116630 system_pods.go:89] "kube-apiserver-bridge-656031" [2245d63b-9806-472f-a020-64147010d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:52:41.682881 1116630 system_pods.go:89] "kube-controller-manager-bridge-656031" [2eb1a08c-8237-4bc9-a9c9-593eaf5c2934] Running
	I0917 00:52:41.682890 1116630 system_pods.go:89] "kube-proxy-2x8jb" [66d194df-5760-4f8a-bff6-cedcf37b09c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:52:41.682900 1116630 system_pods.go:89] "kube-scheduler-bridge-656031" [14aa2801-6113-4cf9-b19c-4387b065e253] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:52:41.682920 1116630 system_pods.go:89] "storage-provisioner" [df1f959b-41b0-430a-87e2-5387c0281c5b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:52:41.682949 1116630 retry.go:31] will retry after 297.686705ms: missing components: kube-dns, kube-proxy
	I0917 00:52:41.970995 1116630 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-656031" context rescaled to 1 replicas
	I0917 00:52:41.988147 1116630 system_pods.go:86] 8 kube-system pods found
	I0917 00:52:41.988188 1116630 system_pods.go:89] "coredns-66bc5c9577-2khw2" [2d7a3b44-5310-459a-9df3-eea1a02f0cf8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:41.988198 1116630 system_pods.go:89] "coredns-66bc5c9577-rh9k8" [fe9d45c6-c1a8-40a1-b200-317fc0f58bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:41.988205 1116630 system_pods.go:89] "etcd-bridge-656031" [72828534-e659-485c-8a49-14137d4878ff] Running
	I0917 00:52:41.988213 1116630 system_pods.go:89] "kube-apiserver-bridge-656031" [2245d63b-9806-472f-a020-64147010d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:52:41.988217 1116630 system_pods.go:89] "kube-controller-manager-bridge-656031" [2eb1a08c-8237-4bc9-a9c9-593eaf5c2934] Running
	I0917 00:52:41.988225 1116630 system_pods.go:89] "kube-proxy-2x8jb" [66d194df-5760-4f8a-bff6-cedcf37b09c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:52:41.988233 1116630 system_pods.go:89] "kube-scheduler-bridge-656031" [14aa2801-6113-4cf9-b19c-4387b065e253] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:52:41.988242 1116630 system_pods.go:89] "storage-provisioner" [df1f959b-41b0-430a-87e2-5387c0281c5b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:52:41.988264 1116630 retry.go:31] will retry after 283.967491ms: missing components: kube-dns, kube-proxy
	I0917 00:52:42.276593 1116630 system_pods.go:86] 8 kube-system pods found
	I0917 00:52:42.276622 1116630 system_pods.go:89] "coredns-66bc5c9577-2khw2" [2d7a3b44-5310-459a-9df3-eea1a02f0cf8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:42.276631 1116630 system_pods.go:89] "coredns-66bc5c9577-rh9k8" [fe9d45c6-c1a8-40a1-b200-317fc0f58bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:42.276636 1116630 system_pods.go:89] "etcd-bridge-656031" [72828534-e659-485c-8a49-14137d4878ff] Running
	I0917 00:52:42.276642 1116630 system_pods.go:89] "kube-apiserver-bridge-656031" [2245d63b-9806-472f-a020-64147010d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:52:42.276646 1116630 system_pods.go:89] "kube-controller-manager-bridge-656031" [2eb1a08c-8237-4bc9-a9c9-593eaf5c2934] Running
	I0917 00:52:42.276652 1116630 system_pods.go:89] "kube-proxy-2x8jb" [66d194df-5760-4f8a-bff6-cedcf37b09c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:52:42.276657 1116630 system_pods.go:89] "kube-scheduler-bridge-656031" [14aa2801-6113-4cf9-b19c-4387b065e253] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:52:42.276662 1116630 system_pods.go:89] "storage-provisioner" [df1f959b-41b0-430a-87e2-5387c0281c5b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:52:42.276677 1116630 retry.go:31] will retry after 422.832461ms: missing components: kube-dns, kube-proxy
	I0917 00:52:42.703512 1116630 system_pods.go:86] 8 kube-system pods found
	I0917 00:52:42.703549 1116630 system_pods.go:89] "coredns-66bc5c9577-2khw2" [2d7a3b44-5310-459a-9df3-eea1a02f0cf8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:42.703557 1116630 system_pods.go:89] "coredns-66bc5c9577-rh9k8" [fe9d45c6-c1a8-40a1-b200-317fc0f58bd0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:42.703562 1116630 system_pods.go:89] "etcd-bridge-656031" [72828534-e659-485c-8a49-14137d4878ff] Running
	I0917 00:52:42.703568 1116630 system_pods.go:89] "kube-apiserver-bridge-656031" [2245d63b-9806-472f-a020-64147010d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:52:42.703573 1116630 system_pods.go:89] "kube-controller-manager-bridge-656031" [2eb1a08c-8237-4bc9-a9c9-593eaf5c2934] Running
	I0917 00:52:42.703578 1116630 system_pods.go:89] "kube-proxy-2x8jb" [66d194df-5760-4f8a-bff6-cedcf37b09c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:52:42.703585 1116630 system_pods.go:89] "kube-scheduler-bridge-656031" [14aa2801-6113-4cf9-b19c-4387b065e253] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:52:42.703594 1116630 system_pods.go:89] "storage-provisioner" [df1f959b-41b0-430a-87e2-5387c0281c5b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:52:42.703615 1116630 retry.go:31] will retry after 369.766148ms: missing components: kube-dns, kube-proxy
	I0917 00:52:43.077797 1116630 system_pods.go:86] 7 kube-system pods found
	I0917 00:52:43.077829 1116630 system_pods.go:89] "coredns-66bc5c9577-rh9k8" [fe9d45c6-c1a8-40a1-b200-317fc0f58bd0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:52:43.077836 1116630 system_pods.go:89] "etcd-bridge-656031" [72828534-e659-485c-8a49-14137d4878ff] Running
	I0917 00:52:43.077841 1116630 system_pods.go:89] "kube-apiserver-bridge-656031" [2245d63b-9806-472f-a020-64147010d216] Running
	I0917 00:52:43.077845 1116630 system_pods.go:89] "kube-controller-manager-bridge-656031" [2eb1a08c-8237-4bc9-a9c9-593eaf5c2934] Running
	I0917 00:52:43.077850 1116630 system_pods.go:89] "kube-proxy-2x8jb" [66d194df-5760-4f8a-bff6-cedcf37b09c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:52:43.077855 1116630 system_pods.go:89] "kube-scheduler-bridge-656031" [14aa2801-6113-4cf9-b19c-4387b065e253] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:52:43.077862 1116630 system_pods.go:89] "storage-provisioner" [df1f959b-41b0-430a-87e2-5387c0281c5b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:52:43.077871 1116630 system_pods.go:126] duration metric: took 1.397692464s to wait for k8s-apps to be running ...
	I0917 00:52:43.077879 1116630 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:52:43.077956 1116630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:52:43.091442 1116630 system_svc.go:56] duration metric: took 13.540219ms WaitForService to wait for kubelet
	I0917 00:52:43.091480 1116630 kubeadm.go:578] duration metric: took 1.97105232s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:52:43.091505 1116630 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:52:43.094459 1116630 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:52:43.094491 1116630 node_conditions.go:123] node cpu capacity is 8
	I0917 00:52:43.094507 1116630 node_conditions.go:105] duration metric: took 2.997024ms to run NodePressure ...
	I0917 00:52:43.094526 1116630 start.go:241] waiting for startup goroutines ...
	I0917 00:52:43.094541 1116630 start.go:246] waiting for cluster config update ...
	I0917 00:52:43.094554 1116630 start.go:255] writing updated cluster config ...
	I0917 00:52:43.094858 1116630 ssh_runner.go:195] Run: rm -f paused
	I0917 00:52:43.099058 1116630 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:52:43.102654 1116630 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rh9k8" in "kube-system" namespace to be "Ready" or be gone ...
	W0917 00:52:45.108401 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:52:47.608372 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:52:50.108028 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:52:52.608843 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:52:55.109200 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:52:57.109417 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:52:59.609451 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:02.108207 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:04.108739 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:06.111296 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:08.608576 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:10.612215 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:13.111165 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:15.609853 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:18.109937 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:20.612837 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:23.108557 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:25.109292 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:27.608605 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:29.609108 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:32.108665 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:34.109610 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:36.115738 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:38.609036 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:41.109978 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:43.110316 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:45.608118 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:47.608461 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:49.609107 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:51.609863 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:54.110718 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:56.608586 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:53:59.108840 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:01.111248 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:03.609223 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:05.609393 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:08.108260 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:10.108824 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:12.609129 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:15.108641 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:17.608242 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:19.608305 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:21.608955 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:24.108484 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:26.609576 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:28.609633 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:30.609889 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:33.108643 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:35.109380 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:37.608286 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:39.608572 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:41.609028 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:44.109453 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:46.608637 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:48.608744 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:51.109039 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:53.608097 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:55.608707 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:54:58.108763 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:00.108826 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:02.109939 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:04.110002 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:06.608427 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:08.609044 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:11.108804 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:13.610020 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:16.108731 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:18.609883 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:21.109560 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:23.608467 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:25.608782 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:27.608867 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:30.108401 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:32.108986 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:34.609549 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:37.108393 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:39.108460 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:41.608381 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:43.609103 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:46.108771 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:48.608452 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:50.608600 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:52.608775 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:55.108550 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:57.608295 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:55:59.609186 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:02.109617 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:04.608285 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:06.609118 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:09.107633 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:11.108790 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:13.608162 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:15.609294 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:18.108492 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:20.109027 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:22.608168 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:24.608792 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:26.609029 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:29.108758 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:31.109011 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:33.609817 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:36.110365 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:38.607855 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:40.608271 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	W0917 00:56:42.608423 1116630 pod_ready.go:104] pod "coredns-66bc5c9577-rh9k8" is not "Ready", error: <nil>
	I0917 00:56:43.100023 1116630 pod_ready.go:86] duration metric: took 3m59.997300948s for pod "coredns-66bc5c9577-rh9k8" in "kube-system" namespace to be "Ready" or be gone ...
	W0917 00:56:43.100064 1116630 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0917 00:56:43.100083 1116630 pod_ready.go:40] duration metric: took 4m0.000985463s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:56:43.102051 1116630 out.go:203] 
	W0917 00:56:43.103344 1116630 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0917 00:56:43.104427 1116630 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (276.61s)
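
The failure above is the extra readiness wait timing out: minikube polled pod "coredns-66bc5c9577-rh9k8" for the full 4m0s window, never saw it report Ready, and exited with GUEST_START (exit status 80). A minimal way to inspect the same condition by hand, assuming the bridge-656031 profile from this log is still running and its kubectl context exists (these are generic kubectl invocations, not part of the test harness):

    # List the CoreDNS pods the harness was polling (context name taken from the profile above)
    kubectl --context bridge-656031 -n kube-system get pods -l k8s-app=kube-dns -o wide
    # Show container status and recent events for the unready pod
    kubectl --context bridge-656031 -n kube-system describe pods -l k8s-app=kube-dns
    # Block on the same Ready condition pod_ready.go waits for, with the same 4-minute budget
    kubectl --context bridge-656031 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m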

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (290.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubenet-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: exit status 80 (4m50.916181874s)

                                                
                                                
-- stdout --
	* [kubenet-656031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "kubenet-656031" primary control-plane node in "kubenet-656031" cluster
	* Pulling base image v0.0.48 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:52:30.493960 1126789 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:52:30.494073 1126789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:52:30.494078 1126789 out.go:374] Setting ErrFile to fd 2...
	I0917 00:52:30.494082 1126789 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:52:30.494360 1126789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:52:30.494877 1126789 out.go:368] Setting JSON to false
	I0917 00:52:30.496309 1126789 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12883,"bootTime":1758057468,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:52:30.496410 1126789 start.go:140] virtualization: kvm guest
	I0917 00:52:30.498796 1126789 out.go:179] * [kubenet-656031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:52:30.500198 1126789 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:52:30.500202 1126789 notify.go:220] Checking for updates...
	I0917 00:52:30.504631 1126789 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:52:30.506199 1126789 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:52:30.507933 1126789 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:52:30.509653 1126789 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:52:30.510886 1126789 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:52:30.512959 1126789 config.go:182] Loaded profile config "bridge-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:52:30.513372 1126789 config.go:182] Loaded profile config "enable-default-cni-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:52:30.513505 1126789 config.go:182] Loaded profile config "kindnet-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:52:30.513659 1126789 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:52:30.543043 1126789 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:52:30.543154 1126789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:52:30.609313 1126789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:52:30.597661636 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:52:30.609458 1126789 docker.go:318] overlay module found
	I0917 00:52:30.611765 1126789 out.go:179] * Using the docker driver based on user configuration
	I0917 00:52:30.613017 1126789 start.go:304] selected driver: docker
	I0917 00:52:30.613039 1126789 start.go:918] validating driver "docker" against <nil>
	I0917 00:52:30.613055 1126789 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:52:30.613783 1126789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:52:30.683284 1126789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-17 00:52:30.670487199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:52:30.683437 1126789 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:52:30.683653 1126789 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:52:30.685385 1126789 out.go:179] * Using Docker driver with root privileges
	I0917 00:52:30.686926 1126789 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0917 00:52:30.687070 1126789 start.go:348] cluster config:
	{Name:kubenet-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubenet-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Net
workPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInt
erval:1m0s}
	I0917 00:52:30.688648 1126789 out.go:179] * Starting "kubenet-656031" primary control-plane node in "kubenet-656031" cluster
	I0917 00:52:30.689953 1126789 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:52:30.691282 1126789 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:52:30.692681 1126789 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:52:30.692744 1126789 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:52:30.692760 1126789 cache.go:58] Caching tarball of preloaded images
	I0917 00:52:30.692797 1126789 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:52:30.692868 1126789 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:52:30.692887 1126789 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:52:30.693034 1126789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/config.json ...
	I0917 00:52:30.693057 1126789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/config.json: {Name:mkd5e3b7309c38c1291aad8c0fc00b8710cccfd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:30.720317 1126789 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:52:30.720342 1126789 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:52:30.720360 1126789 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:52:30.720390 1126789 start.go:360] acquireMachinesLock for kubenet-656031: {Name:mk7819691e9a97c8b3076dc3a18c3a9c10368518 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:52:30.720517 1126789 start.go:364] duration metric: took 103.831µs to acquireMachinesLock for "kubenet-656031"
	I0917 00:52:30.720558 1126789 start.go:93] Provisioning new machine with config: &{Name:kubenet-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubenet-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:52:30.720668 1126789 start.go:125] createHost starting for "" (driver="docker")
	I0917 00:52:30.723951 1126789 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:52:30.724259 1126789 start.go:159] libmachine.API.Create for "kubenet-656031" (driver="docker")
	I0917 00:52:30.724298 1126789 client.go:168] LocalClient.Create starting
	I0917 00:52:30.724389 1126789 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0917 00:52:30.724429 1126789 main.go:141] libmachine: Decoding PEM data...
	I0917 00:52:30.724448 1126789 main.go:141] libmachine: Parsing certificate...
	I0917 00:52:30.724523 1126789 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0917 00:52:30.724554 1126789 main.go:141] libmachine: Decoding PEM data...
	I0917 00:52:30.724570 1126789 main.go:141] libmachine: Parsing certificate...
	I0917 00:52:30.725066 1126789 cli_runner.go:164] Run: docker network inspect kubenet-656031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 00:52:30.745852 1126789 cli_runner.go:211] docker network inspect kubenet-656031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 00:52:30.745962 1126789 network_create.go:284] running [docker network inspect kubenet-656031] to gather additional debugging logs...
	I0917 00:52:30.745989 1126789 cli_runner.go:164] Run: docker network inspect kubenet-656031
	W0917 00:52:30.765431 1126789 cli_runner.go:211] docker network inspect kubenet-656031 returned with exit code 1
	I0917 00:52:30.765471 1126789 network_create.go:287] error running [docker network inspect kubenet-656031]: docker network inspect kubenet-656031: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-656031 not found
	I0917 00:52:30.765486 1126789 network_create.go:289] output of [docker network inspect kubenet-656031]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-656031 not found
	
	** /stderr **
	I0917 00:52:30.765643 1126789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:52:30.786391 1126789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ab651df73000 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:63:f8:73:0d:ee} reservation:<nil>}
	I0917 00:52:30.786966 1126789 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-91db5a27742d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:6c:9c:db:5a:d4} reservation:<nil>}
	I0917 00:52:30.787730 1126789 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0515bd298a94 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:91:5b:dc:7a:d8} reservation:<nil>}
	I0917 00:52:30.788804 1126789 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001deba90}
	I0917 00:52:30.788834 1126789 network_create.go:124] attempt to create docker network kubenet-656031 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0917 00:52:30.788885 1126789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-656031 kubenet-656031
	I0917 00:52:30.863467 1126789 network_create.go:108] docker network kubenet-656031 192.168.76.0/24 created
	I0917 00:52:30.863508 1126789 kic.go:121] calculated static IP "192.168.76.2" for the "kubenet-656031" container
	I0917 00:52:30.863598 1126789 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:52:30.883413 1126789 cli_runner.go:164] Run: docker volume create kubenet-656031 --label name.minikube.sigs.k8s.io=kubenet-656031 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:52:30.908030 1126789 oci.go:103] Successfully created a docker volume kubenet-656031
	I0917 00:52:30.908120 1126789 cli_runner.go:164] Run: docker run --rm --name kubenet-656031-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-656031 --entrypoint /usr/bin/test -v kubenet-656031:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:52:31.384228 1126789 oci.go:107] Successfully prepared a docker volume kubenet-656031
	I0917 00:52:31.384291 1126789 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:52:31.384317 1126789 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:52:31.384399 1126789 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-656031:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:52:34.007045 1126789 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-656031:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.622583349s)
	I0917 00:52:34.007114 1126789 kic.go:203] duration metric: took 2.622764414s to extract preloaded images to volume ...
	W0917 00:52:34.007253 1126789 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:52:34.007329 1126789 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:52:34.007399 1126789 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:52:34.093619 1126789 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-656031 --name kubenet-656031 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-656031 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-656031 --network kubenet-656031 --ip 192.168.76.2 --volume kubenet-656031:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:52:34.411193 1126789 cli_runner.go:164] Run: docker container inspect kubenet-656031 --format={{.State.Running}}
	I0917 00:52:34.436708 1126789 cli_runner.go:164] Run: docker container inspect kubenet-656031 --format={{.State.Status}}
	I0917 00:52:34.459081 1126789 cli_runner.go:164] Run: docker exec kubenet-656031 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:52:34.516802 1126789 oci.go:144] the created container "kubenet-656031" has a running status.
	I0917 00:52:34.516848 1126789 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/kubenet-656031/id_rsa...
	I0917 00:52:34.928168 1126789 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/kubenet-656031/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:52:34.954835 1126789 cli_runner.go:164] Run: docker container inspect kubenet-656031 --format={{.State.Status}}
	I0917 00:52:34.972703 1126789 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:52:34.972726 1126789 kic_runner.go:114] Args: [docker exec --privileged kubenet-656031 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:52:35.024473 1126789 cli_runner.go:164] Run: docker container inspect kubenet-656031 --format={{.State.Status}}
	I0917 00:52:35.047023 1126789 machine.go:93] provisionDockerMachine start ...
	I0917 00:52:35.047131 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:35.069538 1126789 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:35.069850 1126789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I0917 00:52:35.069874 1126789 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:52:35.210430 1126789 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-656031
	
	I0917 00:52:35.210460 1126789 ubuntu.go:182] provisioning hostname "kubenet-656031"
	I0917 00:52:35.210534 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:35.228735 1126789 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:35.229016 1126789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I0917 00:52:35.229034 1126789 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubenet-656031 && echo "kubenet-656031" | sudo tee /etc/hostname
	I0917 00:52:35.386337 1126789 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-656031
	
	I0917 00:52:35.386430 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:35.410182 1126789 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:35.410584 1126789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I0917 00:52:35.410611 1126789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-656031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-656031/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-656031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:52:35.562726 1126789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:52:35.562757 1126789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:52:35.562780 1126789 ubuntu.go:190] setting up certificates
	I0917 00:52:35.562791 1126789 provision.go:84] configureAuth start
	I0917 00:52:35.562842 1126789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-656031
	I0917 00:52:35.582192 1126789 provision.go:143] copyHostCerts
	I0917 00:52:35.582263 1126789 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:52:35.582276 1126789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:52:35.582347 1126789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:52:35.582466 1126789 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:52:35.582477 1126789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:52:35.582508 1126789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:52:35.582569 1126789 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:52:35.582578 1126789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:52:35.582610 1126789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:52:35.582679 1126789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.kubenet-656031 san=[127.0.0.1 192.168.76.2 kubenet-656031 localhost minikube]
	I0917 00:52:35.806450 1126789 provision.go:177] copyRemoteCerts
	I0917 00:52:35.806526 1126789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:52:35.806575 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:35.826941 1126789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/kubenet-656031/id_rsa Username:docker}
	I0917 00:52:35.927435 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:52:35.957359 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0917 00:52:35.983434 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:52:36.011384 1126789 provision.go:87] duration metric: took 448.576937ms to configureAuth
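As a quick sanity check of the configureAuth step, the generated server certificate can be verified against the minikube CA with openssl. A minimal sketch, assuming it runs on the CI host, using the cert paths recorded in the provisioning entries above:

// verify_server_cert.go: check the freshly generated server.pem against ca.pem.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	ca := "/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem"
	cert := "/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem"
	out, err := exec.Command("openssl", "verify", "-CAfile", ca, cert).CombinedOutput()
	if err != nil {
		log.Fatalf("verification failed: %v\n%s", err, out)
	}
	fmt.Print(string(out)) // expect "<path>: OK"
}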
	I0917 00:52:36.011420 1126789 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:52:36.011600 1126789 config.go:182] Loaded profile config "kubenet-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:52:36.011651 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:36.030810 1126789 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:36.031095 1126789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I0917 00:52:36.031113 1126789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:52:36.172854 1126789 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:52:36.172883 1126789 ubuntu.go:71] root file system type: overlay
	I0917 00:52:36.173073 1126789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:52:36.173144 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:36.191601 1126789 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:36.191932 1126789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I0917 00:52:36.192053 1126789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:52:36.347768 1126789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:52:36.347858 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:36.366971 1126789 main.go:141] libmachine: Using SSH client type: native
	I0917 00:52:36.367184 1126789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33151 <nil> <nil>}
	I0917 00:52:36.367206 1126789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:52:37.619326 1126789 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-17 00:52:36.345457924 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0917 00:52:37.619368 1126789 machine.go:96] duration metric: took 2.572318455s to provisionDockerMachine
	I0917 00:52:37.619382 1126789 client.go:171] duration metric: took 6.895078691s to LocalClient.Create
	I0917 00:52:37.619404 1126789 start.go:167] duration metric: took 6.895148692s to libmachine.API.Create "kubenet-656031"
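Once provisionDockerMachine has rewritten /lib/systemd/system/docker.service as shown above, the effective ExecStart can be confirmed with `systemctl cat docker.service`, the same command the provisioner itself runs a little later in this log. A minimal Go sketch, assuming it is run inside the kubenet-656031 container:

// check_docker_unit.go: print the ExecStart lines of the effective docker.service unit.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("systemctl", "cat", "docker.service").Output()
	if err != nil {
		log.Fatalf("systemctl cat docker.service: %v", err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "ExecStart=") {
			fmt.Println(line) // the empty ExecStart= reset plus the dockerd command
		}
	}
}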
	I0917 00:52:37.619413 1126789 start.go:293] postStartSetup for "kubenet-656031" (driver="docker")
	I0917 00:52:37.619422 1126789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:52:37.619491 1126789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:52:37.619533 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:37.637962 1126789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/kubenet-656031/id_rsa Username:docker}
	I0917 00:52:37.740088 1126789 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:52:37.743698 1126789 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:52:37.743737 1126789 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:52:37.743748 1126789 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:52:37.743755 1126789 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:52:37.743767 1126789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:52:37.743819 1126789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:52:37.743951 1126789 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:52:37.744067 1126789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:52:37.753760 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:52:37.783949 1126789 start.go:296] duration metric: took 164.519373ms for postStartSetup
	I0917 00:52:37.784332 1126789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-656031
	I0917 00:52:37.802773 1126789 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/config.json ...
	I0917 00:52:37.803146 1126789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:52:37.803210 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:37.821533 1126789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/kubenet-656031/id_rsa Username:docker}
	I0917 00:52:37.915609 1126789 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:52:37.920205 1126789 start.go:128] duration metric: took 7.199519857s to createHost
	I0917 00:52:37.920237 1126789 start.go:83] releasing machines lock for "kubenet-656031", held for 7.199706105s
	I0917 00:52:37.920297 1126789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-656031
	I0917 00:52:37.938821 1126789 ssh_runner.go:195] Run: cat /version.json
	I0917 00:52:37.938891 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:37.938935 1126789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:52:37.939013 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:52:37.959580 1126789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/kubenet-656031/id_rsa Username:docker}
	I0917 00:52:37.960623 1126789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/kubenet-656031/id_rsa Username:docker}
	I0917 00:52:38.051859 1126789 ssh_runner.go:195] Run: systemctl --version
	I0917 00:52:38.142919 1126789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:52:38.149171 1126789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:52:38.186643 1126789 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:52:38.186732 1126789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:52:38.222413 1126789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
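The find/mv invocation above sidelines any bridge or podman CNI configs so that only kubenet handles pod networking. A rough Go equivalent of that rename step (a sketch only, assuming it runs as root inside the node container):

// disable_bridge_cni.go: rename bridge/podman CNI configs with a .mk_disabled suffix.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				log.Fatalf("rename %s: %v", src, err)
			}
			fmt.Println("disabled", src)
		}
	}
}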
	I0917 00:52:38.222449 1126789 start.go:495] detecting cgroup driver to use...
	I0917 00:52:38.222504 1126789 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:52:38.222646 1126789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:52:38.246338 1126789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:52:38.262689 1126789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:52:38.277266 1126789 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:52:38.277365 1126789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:52:38.289979 1126789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:52:38.300688 1126789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:52:38.312991 1126789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:52:38.324590 1126789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:52:38.336005 1126789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:52:38.347381 1126789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:52:38.358154 1126789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:52:38.368559 1126789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:52:38.380054 1126789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:52:38.392580 1126789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:38.475667 1126789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 00:52:38.558329 1126789 start.go:495] detecting cgroup driver to use...
	I0917 00:52:38.558386 1126789 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:52:38.558447 1126789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:52:38.573379 1126789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:52:38.587374 1126789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:52:38.605719 1126789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:52:38.620252 1126789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:52:38.634167 1126789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:52:38.652980 1126789 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:52:38.657089 1126789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:52:38.669286 1126789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I0917 00:52:38.691451 1126789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:52:38.766128 1126789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:52:38.836806 1126789 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:52:38.836970 1126789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:52:38.856780 1126789 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:52:38.869129 1126789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:38.939187 1126789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:52:39.731373 1126789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:52:39.744428 1126789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:52:39.757828 1126789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:52:39.770147 1126789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:52:39.845977 1126789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:52:39.918296 1126789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:39.988792 1126789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:52:40.018539 1126789 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:52:40.032576 1126789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:40.110030 1126789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:52:40.193508 1126789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:52:40.206306 1126789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:52:40.206370 1126789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:52:40.210322 1126789 start.go:563] Will wait 60s for crictl version
	I0917 00:52:40.210378 1126789 ssh_runner.go:195] Run: which crictl
	I0917 00:52:40.213739 1126789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:52:40.248358 1126789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:52:40.248416 1126789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:52:40.274621 1126789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:52:40.319059 1126789 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:52:40.319164 1126789 cli_runner.go:164] Run: docker network inspect kubenet-656031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:52:40.338619 1126789 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0917 00:52:40.342964 1126789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:52:40.356298 1126789 kubeadm.go:875] updating cluster {Name:kubenet-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubenet-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:52:40.356417 1126789 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:52:40.356461 1126789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:52:40.380112 1126789 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 00:52:40.380139 1126789 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:52:40.380224 1126789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:52:40.407616 1126789 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 00:52:40.407736 1126789 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:52:40.407761 1126789 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 docker true true} ...
	I0917 00:52:40.407902 1126789 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-656031 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kubenet-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:52:40.408003 1126789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:52:40.467493 1126789 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0917 00:52:40.467518 1126789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:52:40.467541 1126789 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-656031 NodeName:kubenet-656031 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:52:40.467691 1126789 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-656031"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
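The KubeletConfiguration above pins cgroupDriver to systemd, which has to match what the Docker daemon reports; the provisioner checks this with `docker info --format {{.CgroupDriver}}` earlier in the log. A minimal Go sketch of the same consistency check:

// cgroup_driver_check.go: confirm Docker's cgroup driver matches the kubelet config.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatalf("docker info: %v", err)
	}
	driver := strings.TrimSpace(string(out))
	if driver != "systemd" {
		log.Fatalf("docker cgroup driver is %q, kubelet expects systemd", driver)
	}
	fmt.Println("cgroup driver OK:", driver)
}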
	I0917 00:52:40.467754 1126789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:52:40.478623 1126789 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:52:40.478697 1126789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 00:52:40.488824 1126789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I0917 00:52:40.509060 1126789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:52:40.530458 1126789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I0917 00:52:40.550954 1126789 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0917 00:52:40.555055 1126789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:52:40.567558 1126789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:52:40.641509 1126789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:52:40.664612 1126789 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031 for IP: 192.168.76.2
	I0917 00:52:40.664640 1126789 certs.go:194] generating shared ca certs ...
	I0917 00:52:40.664663 1126789 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:40.664846 1126789 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:52:40.664901 1126789 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:52:40.664951 1126789 certs.go:256] generating profile certs ...
	I0917 00:52:40.665039 1126789 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/client.key
	I0917 00:52:40.665058 1126789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/client.crt with IP's: []
	I0917 00:52:41.145223 1126789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/client.crt ...
	I0917 00:52:41.145264 1126789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/client.crt: {Name:mkc21b2fca5c97f79c54c86e2926443e8c86a4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:41.145493 1126789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/client.key ...
	I0917 00:52:41.145519 1126789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/client.key: {Name:mk46c283418ad491e38cccb8dc251633a148219a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:41.145672 1126789 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.key.a867c337
	I0917 00:52:41.145701 1126789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.crt.a867c337 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0917 00:52:41.206238 1126789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.crt.a867c337 ...
	I0917 00:52:41.206275 1126789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.crt.a867c337: {Name:mkb627e465b08c6ca2c2f3c82b8287107b438f36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:41.206478 1126789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.key.a867c337 ...
	I0917 00:52:41.206500 1126789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.key.a867c337: {Name:mk15c65d7856c85cc9629f1dbc2ab8fe7fc4d45a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:41.206629 1126789 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.crt.a867c337 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.crt
	I0917 00:52:41.206736 1126789 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.key.a867c337 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.key
	I0917 00:52:41.206801 1126789 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/proxy-client.key
	I0917 00:52:41.206818 1126789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/proxy-client.crt with IP's: []
	I0917 00:52:41.651435 1126789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/proxy-client.crt ...
	I0917 00:52:41.651471 1126789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/proxy-client.crt: {Name:mkbfd162909fc3b2cfd0ea5cb2ade896baa7b217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:41.651697 1126789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/proxy-client.key ...
	I0917 00:52:41.651722 1126789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/proxy-client.key: {Name:mk7ec944627e2cd9316c1dfa1c7eaa01930a60a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:52:41.652052 1126789 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:52:41.652119 1126789 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:52:41.652133 1126789 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:52:41.652168 1126789 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:52:41.652198 1126789 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:52:41.652233 1126789 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:52:41.652296 1126789 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:52:41.653288 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:52:41.687755 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:52:41.717729 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:52:41.748902 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:52:41.778457 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 00:52:41.807881 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:52:41.838724 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:52:41.872066 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubenet-656031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:52:41.903125 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:52:41.941738 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:52:41.973257 1126789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:52:42.008765 1126789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:52:42.035757 1126789 ssh_runner.go:195] Run: openssl version
	I0917 00:52:42.044063 1126789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:52:42.059939 1126789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:52:42.065289 1126789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:52:42.065428 1126789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:52:42.074327 1126789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
	I0917 00:52:42.085768 1126789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:52:42.099470 1126789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:52:42.105089 1126789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:52:42.105165 1126789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:52:42.116894 1126789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:52:42.132259 1126789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:52:42.146386 1126789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:52:42.151493 1126789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:52:42.151566 1126789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:52:42.160272 1126789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
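The openssl/ln sequence above is how minikube makes its CA trusted system-wide: each PEM is hashed with `openssl x509 -hash -noout` and linked as /etc/ssl/certs/<hash>.0. A minimal Go sketch of that step for the minikubeCA.pem shown above (assumes root inside the node):

// cert_hashlink.go: compute the OpenSSL subject hash of a CA cert and symlink it.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatalf("openssl x509 -hash: %v", err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	if err := os.Symlink(cert, link); err != nil {
		log.Fatalf("symlink: %v", err)
	}
	fmt.Println("linked", link, "->", cert)
}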
	I0917 00:52:42.172772 1126789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:52:42.177723 1126789 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:52:42.177789 1126789 kubeadm.go:392] StartCluster: {Name:kubenet-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubenet-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:52:42.177971 1126789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:52:42.201945 1126789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:52:42.214195 1126789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 00:52:42.225968 1126789 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 00:52:42.226030 1126789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 00:52:42.236984 1126789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 00:52:42.237007 1126789 kubeadm.go:157] found existing configuration files:
	
	I0917 00:52:42.237051 1126789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 00:52:42.248413 1126789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 00:52:42.248491 1126789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 00:52:42.259281 1126789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 00:52:42.271289 1126789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 00:52:42.271369 1126789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 00:52:42.281781 1126789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 00:52:42.292838 1126789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 00:52:42.292917 1126789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 00:52:42.303346 1126789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 00:52:42.316850 1126789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 00:52:42.317006 1126789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 00:52:42.330037 1126789 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 00:52:42.369549 1126789 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 00:52:42.369633 1126789 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 00:52:42.389003 1126789 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 00:52:42.389079 1126789 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 00:52:42.389112 1126789 kubeadm.go:310] OS: Linux
	I0917 00:52:42.389152 1126789 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 00:52:42.389207 1126789 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 00:52:42.389257 1126789 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 00:52:42.389317 1126789 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 00:52:42.389370 1126789 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 00:52:42.389411 1126789 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 00:52:42.389479 1126789 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 00:52:42.389518 1126789 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 00:52:42.446394 1126789 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 00:52:42.446541 1126789 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 00:52:42.446666 1126789 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 00:52:42.461701 1126789 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 00:52:42.464843 1126789 out.go:252]   - Generating certificates and keys ...
	I0917 00:52:42.464984 1126789 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 00:52:42.465091 1126789 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 00:52:43.552689 1126789 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 00:52:43.654506 1126789 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 00:52:43.933741 1126789 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 00:52:44.530292 1126789 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 00:52:44.886964 1126789 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 00:52:44.887200 1126789 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubenet-656031 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0917 00:52:45.194529 1126789 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 00:52:45.194717 1126789 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubenet-656031 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0917 00:52:45.826952 1126789 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 00:52:46.031145 1126789 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 00:52:46.315360 1126789 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 00:52:46.315540 1126789 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 00:52:46.425779 1126789 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 00:52:46.961127 1126789 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 00:52:47.132403 1126789 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 00:52:47.261671 1126789 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 00:52:47.646445 1126789 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 00:52:47.647158 1126789 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 00:52:47.651769 1126789 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 00:52:47.653177 1126789 out.go:252]   - Booting up control plane ...
	I0917 00:52:47.653327 1126789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 00:52:47.653445 1126789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 00:52:47.654001 1126789 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 00:52:47.664565 1126789 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 00:52:47.664707 1126789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 00:52:47.670650 1126789 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 00:52:47.670922 1126789 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 00:52:47.670985 1126789 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 00:52:47.762340 1126789 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 00:52:47.762519 1126789 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 00:53:09.264449 1126789 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 21.502112289s
	I0917 00:53:09.269143 1126789 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 00:53:09.269270 1126789 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0917 00:53:09.269389 1126789 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 00:53:09.269518 1126789 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 00:53:11.703962 1126789 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.43482818s
	I0917 00:53:11.842450 1126789 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.573302095s
	I0917 00:53:13.270834 1126789 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001732703s
	I0917 00:53:13.284220 1126789 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 00:53:13.296574 1126789 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 00:53:13.306694 1126789 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 00:53:13.307045 1126789 kubeadm.go:310] [mark-control-plane] Marking the node kubenet-656031 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 00:53:13.316086 1126789 kubeadm.go:310] [bootstrap-token] Using token: y2m7x1.rriylz5igrb0rsn7
	I0917 00:53:13.317553 1126789 out.go:252]   - Configuring RBAC rules ...
	I0917 00:53:13.317745 1126789 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 00:53:13.322820 1126789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 00:53:13.328784 1126789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 00:53:13.331708 1126789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 00:53:13.335488 1126789 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 00:53:13.338495 1126789 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 00:53:13.771139 1126789 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 00:53:14.699662 1126789 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 00:53:15.546232 1126789 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 00:53:15.547745 1126789 kubeadm.go:310] 
	I0917 00:53:15.547868 1126789 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 00:53:15.547891 1126789 kubeadm.go:310] 
	I0917 00:53:15.548029 1126789 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 00:53:15.548041 1126789 kubeadm.go:310] 
	I0917 00:53:15.548070 1126789 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 00:53:15.548136 1126789 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 00:53:15.548192 1126789 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 00:53:15.548214 1126789 kubeadm.go:310] 
	I0917 00:53:15.548278 1126789 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 00:53:15.548284 1126789 kubeadm.go:310] 
	I0917 00:53:15.548337 1126789 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 00:53:15.548342 1126789 kubeadm.go:310] 
	I0917 00:53:15.548402 1126789 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 00:53:15.548490 1126789 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 00:53:15.548569 1126789 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 00:53:15.548574 1126789 kubeadm.go:310] 
	I0917 00:53:15.548669 1126789 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 00:53:15.548760 1126789 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 00:53:15.548766 1126789 kubeadm.go:310] 
	I0917 00:53:15.548866 1126789 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y2m7x1.rriylz5igrb0rsn7 \
	I0917 00:53:15.549027 1126789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e \
	I0917 00:53:15.549055 1126789 kubeadm.go:310] 	--control-plane 
	I0917 00:53:15.549060 1126789 kubeadm.go:310] 
	I0917 00:53:15.549165 1126789 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 00:53:15.549172 1126789 kubeadm.go:310] 
	I0917 00:53:15.549275 1126789 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y2m7x1.rriylz5igrb0rsn7 \
	I0917 00:53:15.549391 1126789 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e 
	I0917 00:53:15.553711 1126789 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 00:53:15.553863 1126789 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 00:53:15.553938 1126789 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0917 00:53:15.553974 1126789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 00:53:15.554070 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:15.554203 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-656031 minikube.k8s.io/updated_at=2025_09_17T00_53_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=kubenet-656031 minikube.k8s.io/primary=true
	I0917 00:53:15.664060 1126789 ops.go:34] apiserver oom_adj: -16
	I0917 00:53:15.664188 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:16.164599 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:16.665138 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:17.164336 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:17.664942 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:18.165104 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:18.664948 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:19.165092 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:19.664542 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:20.165037 1126789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:20.259457 1126789 kubeadm.go:1105] duration metric: took 4.705447529s to wait for elevateKubeSystemPrivileges
	I0917 00:53:20.259501 1126789 kubeadm.go:394] duration metric: took 38.081717736s to StartCluster
	I0917 00:53:20.259525 1126789 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:20.259603 1126789 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:53:20.261706 1126789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:20.262075 1126789 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:53:20.262295 1126789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 00:53:20.262298 1126789 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:53:20.262412 1126789 addons.go:69] Setting storage-provisioner=true in profile "kubenet-656031"
	I0917 00:53:20.262436 1126789 addons.go:238] Setting addon storage-provisioner=true in "kubenet-656031"
	I0917 00:53:20.262443 1126789 addons.go:69] Setting default-storageclass=true in profile "kubenet-656031"
	I0917 00:53:20.262478 1126789 host.go:66] Checking if "kubenet-656031" exists ...
	I0917 00:53:20.262486 1126789 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-656031"
	I0917 00:53:20.262541 1126789 config.go:182] Loaded profile config "kubenet-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:53:20.262879 1126789 cli_runner.go:164] Run: docker container inspect kubenet-656031 --format={{.State.Status}}
	I0917 00:53:20.263050 1126789 cli_runner.go:164] Run: docker container inspect kubenet-656031 --format={{.State.Status}}
	I0917 00:53:20.268582 1126789 out.go:179] * Verifying Kubernetes components...
	I0917 00:53:20.274736 1126789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:53:20.297411 1126789 addons.go:238] Setting addon default-storageclass=true in "kubenet-656031"
	I0917 00:53:20.297467 1126789 host.go:66] Checking if "kubenet-656031" exists ...
	I0917 00:53:20.297959 1126789 cli_runner.go:164] Run: docker container inspect kubenet-656031 --format={{.State.Status}}
	I0917 00:53:20.302435 1126789 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:53:20.304299 1126789 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:53:20.304369 1126789 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:53:20.304461 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:53:20.330827 1126789 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:53:20.330887 1126789 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:53:20.331082 1126789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-656031
	I0917 00:53:20.343939 1126789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/kubenet-656031/id_rsa Username:docker}
	I0917 00:53:20.364382 1126789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33151 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/kubenet-656031/id_rsa Username:docker}
	I0917 00:53:20.406725 1126789 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 00:53:20.481421 1126789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:53:20.503169 1126789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:53:20.532183 1126789 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:53:20.690355 1126789 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0917 00:53:20.690420 1126789 node_ready.go:35] waiting up to 15m0s for node "kubenet-656031" to be "Ready" ...
	I0917 00:53:20.703297 1126789 node_ready.go:49] node "kubenet-656031" is "Ready"
	I0917 00:53:20.703403 1126789 node_ready.go:38] duration metric: took 12.958013ms for node "kubenet-656031" to be "Ready" ...
	I0917 00:53:20.703459 1126789 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:53:20.703548 1126789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:53:20.988164 1126789 api_server.go:72] duration metric: took 726.043673ms to wait for apiserver process to appear ...
	I0917 00:53:20.988193 1126789 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:53:20.988213 1126789 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0917 00:53:20.995298 1126789 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0917 00:53:20.997519 1126789 api_server.go:141] control plane version: v1.34.0
	I0917 00:53:20.997552 1126789 api_server.go:131] duration metric: took 9.351025ms to wait for apiserver health ...
	I0917 00:53:20.997564 1126789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:53:20.999720 1126789 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 00:53:21.002067 1126789 system_pods.go:59] 8 kube-system pods found
	I0917 00:53:21.002106 1126789 system_pods.go:61] "coredns-66bc5c9577-4l97p" [d3cf8261-8090-4f73-8e3f-15ee6d02602d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:21.002117 1126789 system_pods.go:61] "coredns-66bc5c9577-5mbks" [372bfaef-b7de-4f37-beb3-288facd952aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:21.002127 1126789 system_pods.go:61] "etcd-kubenet-656031" [a7f1f343-ff07-4a56-ad34-f993fcf3cc19] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:53:21.002136 1126789 system_pods.go:61] "kube-apiserver-kubenet-656031" [b50a4678-04b5-4320-91c0-38b5276ecffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:53:21.002146 1126789 system_pods.go:61] "kube-controller-manager-kubenet-656031" [505e77b8-c940-4d92-8b82-113872809e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:53:21.002155 1126789 system_pods.go:61] "kube-proxy-zsf2c" [3e2c15dd-6f6f-4423-8ba7-7ac5e0f85b70] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:53:21.002162 1126789 system_pods.go:61] "kube-scheduler-kubenet-656031" [e0e6197e-2e26-4faa-9fbb-2f3b691ef3fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:53:21.002168 1126789 system_pods.go:61] "storage-provisioner" [c1a84098-378d-44ba-8604-694f058991b3] Pending
	I0917 00:53:21.002178 1126789 system_pods.go:74] duration metric: took 4.604202ms to wait for pod list to return data ...
	I0917 00:53:21.002188 1126789 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:53:21.002753 1126789 addons.go:514] duration metric: took 740.45659ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 00:53:21.006229 1126789 default_sa.go:45] found service account: "default"
	I0917 00:53:21.006249 1126789 default_sa.go:55] duration metric: took 4.055135ms for default service account to be created ...
	I0917 00:53:21.006258 1126789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:53:21.009815 1126789 system_pods.go:86] 8 kube-system pods found
	I0917 00:53:21.009854 1126789 system_pods.go:89] "coredns-66bc5c9577-4l97p" [d3cf8261-8090-4f73-8e3f-15ee6d02602d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:21.009866 1126789 system_pods.go:89] "coredns-66bc5c9577-5mbks" [372bfaef-b7de-4f37-beb3-288facd952aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:21.009884 1126789 system_pods.go:89] "etcd-kubenet-656031" [a7f1f343-ff07-4a56-ad34-f993fcf3cc19] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:53:21.009896 1126789 system_pods.go:89] "kube-apiserver-kubenet-656031" [b50a4678-04b5-4320-91c0-38b5276ecffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:53:21.009941 1126789 system_pods.go:89] "kube-controller-manager-kubenet-656031" [505e77b8-c940-4d92-8b82-113872809e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:53:21.009954 1126789 system_pods.go:89] "kube-proxy-zsf2c" [3e2c15dd-6f6f-4423-8ba7-7ac5e0f85b70] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:53:21.009961 1126789 system_pods.go:89] "kube-scheduler-kubenet-656031" [e0e6197e-2e26-4faa-9fbb-2f3b691ef3fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:53:21.009970 1126789 system_pods.go:89] "storage-provisioner" [c1a84098-378d-44ba-8604-694f058991b3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:53:21.010001 1126789 retry.go:31] will retry after 290.172662ms: missing components: kube-dns, kube-proxy
	I0917 00:53:21.196233 1126789 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-656031" context rescaled to 1 replicas
	I0917 00:53:21.305258 1126789 system_pods.go:86] 8 kube-system pods found
	I0917 00:53:21.305304 1126789 system_pods.go:89] "coredns-66bc5c9577-4l97p" [d3cf8261-8090-4f73-8e3f-15ee6d02602d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:21.305317 1126789 system_pods.go:89] "coredns-66bc5c9577-5mbks" [372bfaef-b7de-4f37-beb3-288facd952aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:21.305328 1126789 system_pods.go:89] "etcd-kubenet-656031" [a7f1f343-ff07-4a56-ad34-f993fcf3cc19] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:53:21.305335 1126789 system_pods.go:89] "kube-apiserver-kubenet-656031" [b50a4678-04b5-4320-91c0-38b5276ecffe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:53:21.305346 1126789 system_pods.go:89] "kube-controller-manager-kubenet-656031" [505e77b8-c940-4d92-8b82-113872809e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:53:21.305358 1126789 system_pods.go:89] "kube-proxy-zsf2c" [3e2c15dd-6f6f-4423-8ba7-7ac5e0f85b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:53:21.305367 1126789 system_pods.go:89] "kube-scheduler-kubenet-656031" [e0e6197e-2e26-4faa-9fbb-2f3b691ef3fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:53:21.305375 1126789 system_pods.go:89] "storage-provisioner" [c1a84098-378d-44ba-8604-694f058991b3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:53:21.305496 1126789 system_pods.go:126] duration metric: took 299.199992ms to wait for k8s-apps to be running ...
	I0917 00:53:21.305516 1126789 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:53:21.305570 1126789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:53:21.325384 1126789 system_svc.go:56] duration metric: took 19.855006ms WaitForService to wait for kubelet
	I0917 00:53:21.325474 1126789 kubeadm.go:578] duration metric: took 1.063356207s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:53:21.325503 1126789 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:53:21.328626 1126789 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:53:21.328659 1126789 node_conditions.go:123] node cpu capacity is 8
	I0917 00:53:21.328676 1126789 node_conditions.go:105] duration metric: took 3.167227ms to run NodePressure ...
	I0917 00:53:21.328692 1126789 start.go:241] waiting for startup goroutines ...
	I0917 00:53:21.328702 1126789 start.go:246] waiting for cluster config update ...
	I0917 00:53:21.328722 1126789 start.go:255] writing updated cluster config ...
	I0917 00:53:21.329013 1126789 ssh_runner.go:195] Run: rm -f paused
	I0917 00:53:21.334122 1126789 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:53:21.404969 1126789 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4l97p" in "kube-system" namespace to be "Ready" or be gone ...
	W0917 00:53:23.411155 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:25.411548 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:27.412033 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:29.910347 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:31.914298 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:34.411112 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:36.412811 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:38.911408 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:40.911499 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:42.911779 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:45.079702 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:47.412406 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:49.412750 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:51.911873 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:54.412120 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:56.911241 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:53:59.410513 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:01.411551 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:03.912179 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:06.411717 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:08.412255 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:10.910967 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:13.412434 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:15.911816 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:17.912141 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:20.411207 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:22.411955 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:24.910181 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:26.910937 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:28.911624 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:30.912386 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:33.411703 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:35.911458 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:38.411888 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:40.911191 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:43.410525 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:45.911803 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:48.410488 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:50.911019 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:53.411896 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:55.910676 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:54:58.410050 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:00.410297 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:02.410605 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:04.411684 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:06.412348 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:08.910613 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:10.910845 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:13.410486 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:15.911206 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:18.411652 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:20.912755 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:23.411603 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:25.910982 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:28.410631 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:30.410936 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:32.911245 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:35.411094 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:37.911978 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:39.912188 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:42.411149 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:44.412677 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:46.413117 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:48.910691 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:50.911343 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:53.411161 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:55.912571 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:55:58.411998 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:00.910634 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:02.911127 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:04.911271 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:07.410993 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:09.912077 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:12.410957 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:14.412213 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:16.911069 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:18.911482 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:20.911825 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:23.411992 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:25.911476 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:27.912099 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:29.912275 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:32.411238 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:34.911980 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:37.410744 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:39.910733 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:41.911190 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:44.411385 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:46.911551 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:48.912817 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:51.410715 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:53.413573 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:55.911481 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:56:57.912801 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:57:00.412617 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:57:02.413282 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:57:04.911166 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:57:07.411105 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:57:09.914815 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:57:12.410899 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:57:14.411828 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:57:16.910795 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	W0917 00:57:18.911692 1126789 pod_ready.go:104] pod "coredns-66bc5c9577-4l97p" is not "Ready", error: <nil>
	I0917 00:57:21.335103 1126789 pod_ready.go:86] duration metric: took 3m59.930066769s for pod "coredns-66bc5c9577-4l97p" in "kube-system" namespace to be "Ready" or be gone ...
	W0917 00:57:21.335144 1126789 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0917 00:57:21.335163 1126789 pod_ready.go:40] duration metric: took 4m0.000987616s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:57:21.336623 1126789 out.go:203] 
	W0917 00:57:21.338028 1126789 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0917 00:57:21.342469 1126789 out.go:203] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (290.94s)

TestNetworkPlugins/group/calico/Start (272.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (4m31.981587625s)

-- stdout --
	* [calico-656031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-656031" primary control-plane node in "calico-656031" cluster
	* Pulling base image v0.0.48 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0917 00:53:10.435368 1137834 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:53:10.435694 1137834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:53:10.435702 1137834 out.go:374] Setting ErrFile to fd 2...
	I0917 00:53:10.435708 1137834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:53:10.436097 1137834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:53:10.436804 1137834 out.go:368] Setting JSON to false
	I0917 00:53:10.438588 1137834 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12923,"bootTime":1758057468,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:53:10.438757 1137834 start.go:140] virtualization: kvm guest
	I0917 00:53:10.441085 1137834 out.go:179] * [calico-656031] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:53:10.442539 1137834 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:53:10.442573 1137834 notify.go:220] Checking for updates...
	I0917 00:53:10.447354 1137834 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:53:10.448475 1137834 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:53:10.449504 1137834 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0917 00:53:10.450521 1137834 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:53:10.451730 1137834 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:53:10.453770 1137834 config.go:182] Loaded profile config "bridge-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:53:10.453900 1137834 config.go:182] Loaded profile config "enable-default-cni-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:53:10.454028 1137834 config.go:182] Loaded profile config "kubenet-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:53:10.454147 1137834 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:53:10.484540 1137834 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:53:10.484651 1137834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:53:10.556032 1137834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:53:10.544348296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:53:10.556174 1137834 docker.go:318] overlay module found
	I0917 00:53:10.559450 1137834 out.go:179] * Using the docker driver based on user configuration
	I0917 00:53:10.560921 1137834 start.go:304] selected driver: docker
	I0917 00:53:10.560946 1137834 start.go:918] validating driver "docker" against <nil>
	I0917 00:53:10.560964 1137834 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:53:10.561763 1137834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:53:10.638669 1137834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:53:10.626760313 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:53:10.638856 1137834 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:53:10.639185 1137834 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:53:10.640934 1137834 out.go:179] * Using Docker driver with root privileges
	I0917 00:53:10.642218 1137834 cni.go:84] Creating CNI manager for "calico"
	I0917 00:53:10.642243 1137834 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0917 00:53:10.642383 1137834 start.go:348] cluster config:
	{Name:calico-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInt
erval:1m0s}
	I0917 00:53:10.643810 1137834 out.go:179] * Starting "calico-656031" primary control-plane node in "calico-656031" cluster
	I0917 00:53:10.644925 1137834 cache.go:123] Beginning downloading kic base image for docker with docker
	I0917 00:53:10.647128 1137834 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:53:10.649200 1137834 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:53:10.649253 1137834 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0917 00:53:10.649274 1137834 cache.go:58] Caching tarball of preloaded images
	I0917 00:53:10.649408 1137834 preload.go:172] Found /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 00:53:10.649406 1137834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:53:10.649420 1137834 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0917 00:53:10.649566 1137834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/config.json ...
	I0917 00:53:10.649598 1137834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/config.json: {Name:mk37ad0664ff82eecdbeb49bbdd046bd2edcada1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:10.674412 1137834 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:53:10.674436 1137834 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:53:10.674452 1137834 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:53:10.674483 1137834 start.go:360] acquireMachinesLock for calico-656031: {Name:mk5b7945a1d5ba79e617dd67884c5f2612ee85b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:53:10.674595 1137834 start.go:364] duration metric: took 91.143µs to acquireMachinesLock for "calico-656031"
	I0917 00:53:10.674632 1137834 start.go:93] Provisioning new machine with config: &{Name:calico-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:53:10.674727 1137834 start.go:125] createHost starting for "" (driver="docker")
	I0917 00:53:10.677290 1137834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:53:10.677560 1137834 start.go:159] libmachine.API.Create for "calico-656031" (driver="docker")
	I0917 00:53:10.677597 1137834 client.go:168] LocalClient.Create starting
	I0917 00:53:10.677676 1137834 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem
	I0917 00:53:10.677711 1137834 main.go:141] libmachine: Decoding PEM data...
	I0917 00:53:10.677730 1137834 main.go:141] libmachine: Parsing certificate...
	I0917 00:53:10.677801 1137834 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem
	I0917 00:53:10.677828 1137834 main.go:141] libmachine: Decoding PEM data...
	I0917 00:53:10.677844 1137834 main.go:141] libmachine: Parsing certificate...
	I0917 00:53:10.678287 1137834 cli_runner.go:164] Run: docker network inspect calico-656031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 00:53:10.701009 1137834 cli_runner.go:211] docker network inspect calico-656031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 00:53:10.701100 1137834 network_create.go:284] running [docker network inspect calico-656031] to gather additional debugging logs...
	I0917 00:53:10.701123 1137834 cli_runner.go:164] Run: docker network inspect calico-656031
	W0917 00:53:10.722774 1137834 cli_runner.go:211] docker network inspect calico-656031 returned with exit code 1
	I0917 00:53:10.722809 1137834 network_create.go:287] error running [docker network inspect calico-656031]: docker network inspect calico-656031: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-656031 not found
	I0917 00:53:10.722826 1137834 network_create.go:289] output of [docker network inspect calico-656031]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-656031 not found
	
	** /stderr **
	I0917 00:53:10.722955 1137834 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:53:10.744075 1137834 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ab651df73000 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:63:f8:73:0d:ee} reservation:<nil>}
	I0917 00:53:10.744788 1137834 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-91db5a27742d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:6c:9c:db:5a:d4} reservation:<nil>}
	I0917 00:53:10.745792 1137834 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0515bd298a94 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:91:5b:dc:7a:d8} reservation:<nil>}
	I0917 00:53:10.746764 1137834 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3934825e8e9a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:a4:a0:93:30:ee} reservation:<nil>}
	I0917 00:53:10.747670 1137834 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b5745d1cb4ca IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:ee:e8:cd:f9:ec} reservation:<nil>}
	I0917 00:53:10.751383 1137834 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-391424007d49 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:46:b4:0c:9e:16:b7} reservation:<nil>}
	I0917 00:53:10.752530 1137834 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003f9fe0}
	I0917 00:53:10.752562 1137834 network_create.go:124] attempt to create docker network calico-656031 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0917 00:53:10.752618 1137834 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-656031 calico-656031
	I0917 00:53:10.824443 1137834 network_create.go:108] docker network calico-656031 192.168.103.0/24 created
	I0917 00:53:10.824479 1137834 kic.go:121] calculated static IP "192.168.103.2" for the "calico-656031" container
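The six "skipping subnet" lines above step through 192.168.49.0/24, .58, .67, .76, .85 and .94 before 192.168.103.0/24 comes up free; the gateway then gets .1 and the kic container the static .2. A minimal Go sketch of that selection loop (the hard-coded candidate list, the step of 9 and the variable names are illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"net"
)

// taken mirrors the bridges the log reports as occupied; in practice these
// would be discovered by inspecting existing docker networks, not hard-coded.
var taken = map[string]bool{
	"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
	"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
}

func main() {
	// Walk candidate /24s starting at 192.168.49.0 in steps of 9 and take the
	// first one not already backing a bridge.
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue
		}
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			panic(err)
		}
		gw := append(net.IP(nil), ipnet.IP.To4()...)
		gw[3] = 1 // gateway, 192.168.103.1 in this run
		node := append(net.IP(nil), gw...)
		node[3] = 2 // static IP for the kic container, 192.168.103.2
		fmt.Println("free subnet:", cidr, "gateway:", gw, "node:", node)
		return
	}
}

Run against the bridges listed above, this prints 192.168.103.0/24 with gateway 192.168.103.1 and node IP 192.168.103.2, matching the log.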
	I0917 00:53:10.824543 1137834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:53:10.847167 1137834 cli_runner.go:164] Run: docker volume create calico-656031 --label name.minikube.sigs.k8s.io=calico-656031 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:53:10.869222 1137834 oci.go:103] Successfully created a docker volume calico-656031
	I0917 00:53:10.869352 1137834 cli_runner.go:164] Run: docker run --rm --name calico-656031-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-656031 --entrypoint /usr/bin/test -v calico-656031:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:53:11.419936 1137834 oci.go:107] Successfully prepared a docker volume calico-656031
	I0917 00:53:11.419970 1137834 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:53:11.419996 1137834 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:53:11.420074 1137834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-656031:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:53:15.565431 1137834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-656031:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.145305397s)
	I0917 00:53:15.565464 1137834 kic.go:203] duration metric: took 4.145465638s to extract preloaded images to volume ...
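The 4.1s extraction step above is just a throwaway kicbase container whose entrypoint is tar, with the lz4 preload bind-mounted read-only and the named volume mounted at /extractDir. A rough Go equivalent using os/exec (paths and image digest copied from this run; this is a sketch, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the logged step: unpack the lz4 preload into the
	// calico-656031 volume via the image's own tar binary.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro",
		"-v", "calico-656031:/extractDir",
		"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}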
	W0917 00:53:15.565548 1137834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:53:15.565579 1137834 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:53:15.565623 1137834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:53:15.654222 1137834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-656031 --name calico-656031 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-656031 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-656031 --network calico-656031 --ip 192.168.103.2 --volume calico-656031:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:53:15.978344 1137834 cli_runner.go:164] Run: docker container inspect calico-656031 --format={{.State.Running}}
	I0917 00:53:15.999177 1137834 cli_runner.go:164] Run: docker container inspect calico-656031 --format={{.State.Status}}
	I0917 00:53:16.022651 1137834 cli_runner.go:164] Run: docker exec calico-656031 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:53:16.073859 1137834 oci.go:144] the created container "calico-656031" has a running status.
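After the long docker run above, the flow only proceeds once `docker container inspect --format {{.State.Running}}` reports true. A small, hypothetical polling helper in Go that performs the same check (waitRunning is an invented name, not a minikube function):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect` until the container reports
// State.Running=true or the timeout expires.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q not running after %s", name, timeout)
}

func main() {
	if err := waitRunning("calico-656031", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}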
	I0917 00:53:16.073925 1137834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa...
	I0917 00:53:16.512296 1137834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:53:16.543276 1137834 cli_runner.go:164] Run: docker container inspect calico-656031 --format={{.State.Status}}
	I0917 00:53:16.562630 1137834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:53:16.562649 1137834 kic_runner.go:114] Args: [docker exec --privileged calico-656031 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:53:16.620552 1137834 cli_runner.go:164] Run: docker container inspect calico-656031 --format={{.State.Status}}
	I0917 00:53:16.640378 1137834 machine.go:93] provisionDockerMachine start ...
	I0917 00:53:16.640477 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:16.660165 1137834 main.go:141] libmachine: Using SSH client type: native
	I0917 00:53:16.660500 1137834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33157 <nil> <nil>}
	I0917 00:53:16.660523 1137834 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:53:16.804207 1137834 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-656031
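The native SSH client here reaches the container through the randomly published host port (127.0.0.1:33157) using the freshly generated id_rsa key. A self-contained sketch with golang.org/x/crypto/ssh that runs the same `hostname` probe (key path and port taken from this run's log; error handling trimmed to panics for brevity):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33157", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // prints "calico-656031"
}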
	
	I0917 00:53:16.804240 1137834 ubuntu.go:182] provisioning hostname "calico-656031"
	I0917 00:53:16.804305 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:16.824355 1137834 main.go:141] libmachine: Using SSH client type: native
	I0917 00:53:16.824659 1137834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33157 <nil> <nil>}
	I0917 00:53:16.824682 1137834 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-656031 && echo "calico-656031" | sudo tee /etc/hostname
	I0917 00:53:16.980433 1137834 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-656031
	
	I0917 00:53:16.980536 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:16.999050 1137834 main.go:141] libmachine: Using SSH client type: native
	I0917 00:53:16.999320 1137834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33157 <nil> <nil>}
	I0917 00:53:16.999351 1137834 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-656031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-656031/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-656031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:53:17.143266 1137834 main.go:141] libmachine: SSH cmd err, output: <nil>: 
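The shell block above is idempotent: if any /etc/hosts line already ends in calico-656031 it does nothing, otherwise it rewrites an existing 127.0.1.1 entry or appends one. The same logic as a small Go sketch (ensureHostname is an invented helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname maps 127.0.1.1 to the node hostname: skip if already mapped,
// rewrite an existing 127.0.1.1 line, or append a new entry.
func ensureHostname(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, line := range lines {
		fields := strings.Fields(line)
		if len(fields) > 1 && fields[len(fields)-1] == hostname {
			return nil // already mapped, nothing to do
		}
	}
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing entry
			return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname) // no entry yet, append one
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "calico-656031"); err != nil {
		fmt.Println(err)
	}
}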
	I0917 00:53:17.143300 1137834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-661878/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-661878/.minikube}
	I0917 00:53:17.143360 1137834 ubuntu.go:190] setting up certificates
	I0917 00:53:17.143380 1137834 provision.go:84] configureAuth start
	I0917 00:53:17.143446 1137834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-656031
	I0917 00:53:17.164553 1137834 provision.go:143] copyHostCerts
	I0917 00:53:17.164619 1137834 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem, removing ...
	I0917 00:53:17.164635 1137834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem
	I0917 00:53:17.164720 1137834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/ca.pem (1078 bytes)
	I0917 00:53:17.164921 1137834 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem, removing ...
	I0917 00:53:17.164957 1137834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem
	I0917 00:53:17.165021 1137834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/cert.pem (1123 bytes)
	I0917 00:53:17.165121 1137834 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem, removing ...
	I0917 00:53:17.165132 1137834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem
	I0917 00:53:17.165177 1137834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-661878/.minikube/key.pem (1679 bytes)
	I0917 00:53:17.165256 1137834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem org=jenkins.calico-656031 san=[127.0.0.1 192.168.103.2 calico-656031 localhost minikube]
	I0917 00:53:17.493223 1137834 provision.go:177] copyRemoteCerts
	I0917 00:53:17.493282 1137834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:53:17.493323 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:17.511371 1137834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa Username:docker}
	I0917 00:53:17.610831 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:53:17.638461 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:53:17.670610 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:53:17.706550 1137834 provision.go:87] duration metric: took 563.150341ms to configureAuth
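configureAuth above copies the host CA material and then mints a server certificate whose SANs cover 127.0.0.1, 192.168.103.2, calico-656031, localhost and minikube. A compact crypto/x509 sketch that produces a certificate with those SANs; note it is self-signed only to keep the example short, whereas minikube signs server.pem with its ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-656031"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log.
		DNSNames:    []string{"calico-656031", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}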
	I0917 00:53:17.706580 1137834 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:53:17.706845 1137834 config.go:182] Loaded profile config "calico-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:53:17.706945 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:17.728162 1137834 main.go:141] libmachine: Using SSH client type: native
	I0917 00:53:17.728373 1137834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33157 <nil> <nil>}
	I0917 00:53:17.728384 1137834 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 00:53:17.878274 1137834 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 00:53:17.878303 1137834 ubuntu.go:71] root file system type: overlay
	I0917 00:53:17.878462 1137834 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 00:53:17.878535 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:17.899086 1137834 main.go:141] libmachine: Using SSH client type: native
	I0917 00:53:17.899348 1137834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33157 <nil> <nil>}
	I0917 00:53:17.899461 1137834 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 00:53:18.057527 1137834 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 00:53:18.057610 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:18.079962 1137834 main.go:141] libmachine: Using SSH client type: native
	I0917 00:53:18.080184 1137834 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33157 <nil> <nil>}
	I0917 00:53:18.080205 1137834 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 00:53:19.211153 1137834 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-17 00:53:18.054292320 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0917 00:53:19.211186 1137834 machine.go:96] duration metric: took 2.57078383s to provisionDockerMachine
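The `sudo diff -u ... || { sudo mv ...; systemctl ... }` one-liner above means docker is only reconfigured and restarted when the rendered unit actually differs from what is installed; the diff output shows this first run did differ, so the restart ran. The same guard expressed as a hypothetical Go helper (updateUnit is an invented name; it assumes root):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs the rendered unit and restarts docker only when the
// content actually changed, mirroring the shell guard above.
func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // nothing changed, skip daemon-reload/restart
	}
	if err := os.WriteFile(path, rendered, 0644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// rendered would be the full docker.service shown above; abbreviated here.
	rendered := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", rendered); err != nil {
		fmt.Println(err)
	}
}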
	I0917 00:53:19.211201 1137834 client.go:171] duration metric: took 8.533594358s to LocalClient.Create
	I0917 00:53:19.211223 1137834 start.go:167] duration metric: took 8.533664495s to libmachine.API.Create "calico-656031"
	I0917 00:53:19.211235 1137834 start.go:293] postStartSetup for "calico-656031" (driver="docker")
	I0917 00:53:19.211247 1137834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:53:19.211311 1137834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:53:19.211360 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:19.230836 1137834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa Username:docker}
	I0917 00:53:19.337728 1137834 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:53:19.341949 1137834 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:53:19.341992 1137834 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:53:19.342006 1137834 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:53:19.342014 1137834 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:53:19.342031 1137834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/addons for local assets ...
	I0917 00:53:19.342164 1137834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-661878/.minikube/files for local assets ...
	I0917 00:53:19.342315 1137834 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem -> 6653992.pem in /etc/ssl/certs
	I0917 00:53:19.342445 1137834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:53:19.364386 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:53:19.403109 1137834 start.go:296] duration metric: took 191.856182ms for postStartSetup
	I0917 00:53:19.403613 1137834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-656031
	I0917 00:53:19.422880 1137834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/config.json ...
	I0917 00:53:19.423251 1137834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:53:19.423305 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:19.442611 1137834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa Username:docker}
	I0917 00:53:19.537935 1137834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:53:19.543630 1137834 start.go:128] duration metric: took 8.868879347s to createHost
	I0917 00:53:19.543663 1137834 start.go:83] releasing machines lock for "calico-656031", held for 8.869046684s
	I0917 00:53:19.543746 1137834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-656031
	I0917 00:53:19.561347 1137834 ssh_runner.go:195] Run: cat /version.json
	I0917 00:53:19.561414 1137834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:53:19.561500 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:19.561420 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:19.581726 1137834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa Username:docker}
	I0917 00:53:19.582354 1137834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa Username:docker}
	I0917 00:53:19.762093 1137834 ssh_runner.go:195] Run: systemctl --version
	I0917 00:53:19.766885 1137834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:53:19.771596 1137834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 00:53:19.803483 1137834 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:53:19.803567 1137834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:53:19.832102 1137834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:53:19.832135 1137834 start.go:495] detecting cgroup driver to use...
	I0917 00:53:19.832172 1137834 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:53:19.832324 1137834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:53:19.850440 1137834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0917 00:53:19.862987 1137834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 00:53:19.875556 1137834 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0917 00:53:19.875616 1137834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0917 00:53:19.887119 1137834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:53:19.899813 1137834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 00:53:19.913109 1137834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 00:53:19.924539 1137834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:53:19.935287 1137834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 00:53:19.947341 1137834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 00:53:19.961148 1137834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 00:53:19.974142 1137834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:53:19.984296 1137834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:53:19.995981 1137834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:53:20.080513 1137834 ssh_runner.go:195] Run: sudo systemctl restart containerd
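Because the host cgroup driver was detected as systemd, the sed edits above flip SystemdCgroup to true (and normalize the runc runtime entries) in /etc/containerd/config.toml before containerd is restarted. A standalone sketch of that one substitution using Go's regexp package (setSystemdCgroup is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup flips every `SystemdCgroup = ...` entry in the containerd
// config to true, like the sed one-liner in the log.
func setSystemdCgroup(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml"); err != nil {
		fmt.Println(err)
	}
}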
	I0917 00:53:20.164075 1137834 start.go:495] detecting cgroup driver to use...
	I0917 00:53:20.164130 1137834 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:53:20.164192 1137834 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 00:53:20.179774 1137834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:53:20.194133 1137834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:53:20.213859 1137834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:53:20.227972 1137834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 00:53:20.243111 1137834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:53:20.267345 1137834 ssh_runner.go:195] Run: which cri-dockerd
	I0917 00:53:20.273388 1137834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 00:53:20.293321 1137834 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0917 00:53:20.329796 1137834 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 00:53:20.448447 1137834 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 00:53:20.576597 1137834 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0917 00:53:20.576734 1137834 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0917 00:53:20.608576 1137834 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0917 00:53:20.628902 1137834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:53:20.748214 1137834 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 00:53:21.557258 1137834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:53:21.571251 1137834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 00:53:21.586264 1137834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:53:21.601253 1137834 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 00:53:21.686629 1137834 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 00:53:21.779208 1137834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:53:21.849767 1137834 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 00:53:21.871316 1137834 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0917 00:53:21.883670 1137834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:53:21.965101 1137834 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 00:53:22.049101 1137834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 00:53:22.065130 1137834 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 00:53:22.065189 1137834 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 00:53:22.069815 1137834 start.go:563] Will wait 60s for crictl version
	I0917 00:53:22.069887 1137834 ssh_runner.go:195] Run: which crictl
	I0917 00:53:22.074201 1137834 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:53:22.114226 1137834 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0917 00:53:22.114297 1137834 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:53:22.141639 1137834 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 00:53:22.170408 1137834 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0917 00:53:22.170504 1137834 cli_runner.go:164] Run: docker network inspect calico-656031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:53:22.194092 1137834 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 00:53:22.199872 1137834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:53:22.217660 1137834 kubeadm.go:875] updating cluster {Name:calico-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:53:22.217816 1137834 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0917 00:53:22.217888 1137834 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:53:22.246174 1137834 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 00:53:22.246207 1137834 docker.go:621] Images already preloaded, skipping extraction
	I0917 00:53:22.246265 1137834 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 00:53:22.272572 1137834 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 00:53:22.272610 1137834 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:53:22.272625 1137834 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 docker true true} ...
	I0917 00:53:22.272739 1137834 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-656031 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0917 00:53:22.272811 1137834 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 00:53:22.336285 1137834 cni.go:84] Creating CNI manager for "calico"
	I0917 00:53:22.336323 1137834 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:53:22.336363 1137834 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-656031 NodeName:calico-656031 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:53:22.336520 1137834 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "calico-656031"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
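The kubeadm config above is rendered from the options struct a few lines earlier, with the node name, node IP and CRI socket filled in. As a hedged illustration of that kind of rendering, here is a text/template snippet for just the nodeRegistration block; the template text and field names are invented for this example and are not minikube's real template variables:

package main

import (
	"os"
	"text/template"
)

// Invented template and field names, for illustration only.
const nodeRegistration = `nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	t := template.Must(template.New("nodeRegistration").Parse(nodeRegistration))
	_ = t.Execute(os.Stdout, map[string]string{
		"CRISocket": "/var/run/cri-dockerd.sock",
		"NodeName":  "calico-656031",
		"NodeIP":    "192.168.103.2",
	})
}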
	
	I0917 00:53:22.336593 1137834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:53:22.349348 1137834 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:53:22.349423 1137834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 00:53:22.360440 1137834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 00:53:22.382804 1137834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:53:22.409897 1137834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0917 00:53:22.432494 1137834 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 00:53:22.436853 1137834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:53:22.450267 1137834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:53:22.522415 1137834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:53:22.546942 1137834 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031 for IP: 192.168.103.2
	I0917 00:53:22.546968 1137834 certs.go:194] generating shared ca certs ...
	I0917 00:53:22.546994 1137834 certs.go:226] acquiring lock for ca certs: {Name:mk24ad2a96dc59b16a9413b27c57b0ccb7d8ca57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:22.547173 1137834 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key
	I0917 00:53:22.547239 1137834 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key
	I0917 00:53:22.547253 1137834 certs.go:256] generating profile certs ...
	I0917 00:53:22.547331 1137834 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/client.key
	I0917 00:53:22.547349 1137834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/client.crt with IP's: []
	I0917 00:53:23.297043 1137834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/client.crt ...
	I0917 00:53:23.297083 1137834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/client.crt: {Name:mke0d42dfc95a99f3cb82697702659d8a152fd3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:23.297488 1137834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/client.key ...
	I0917 00:53:23.297513 1137834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/client.key: {Name:mke8a49aa53bf20539e6ceddac858e2402a84d97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:23.297643 1137834 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.key.c8119372
	I0917 00:53:23.297669 1137834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.crt.c8119372 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0917 00:53:23.618526 1137834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.crt.c8119372 ...
	I0917 00:53:23.618555 1137834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.crt.c8119372: {Name:mk967a311c53562165f34385cb9890df593dd5e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:23.618722 1137834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.key.c8119372 ...
	I0917 00:53:23.618741 1137834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.key.c8119372: {Name:mke5dbf64a0269a1f46cabc294e82aca85900d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:23.618842 1137834 certs.go:381] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.crt.c8119372 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.crt
	I0917 00:53:23.618998 1137834 certs.go:385] copying /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.key.c8119372 -> /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.key
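Note the apiserver cert's IP SANs above include 10.96.0.1 alongside 127.0.0.1, 10.0.0.1 and the node IP: 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR, which is where the in-cluster kubernetes Service resolves, so clients inside the cluster can verify the apiserver by that IP. A quick sketch of deriving it (firstServiceIP is an illustrative helper):

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the first usable address of a service CIDR, the
// address used by the in-cluster `kubernetes` Service.
func firstServiceIP(serviceCIDR string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(serviceCIDR)
	if err != nil {
		return nil, err
	}
	ip := append(net.IP(nil), ipnet.IP.To4()...)
	ip[3]++ // 10.96.0.0 -> 10.96.0.1
	return ip, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}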
	I0917 00:53:23.619091 1137834 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/proxy-client.key
	I0917 00:53:23.619114 1137834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/proxy-client.crt with IP's: []
	I0917 00:53:24.002652 1137834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/proxy-client.crt ...
	I0917 00:53:24.002688 1137834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/proxy-client.crt: {Name:mk2ec8d562e7edbcc4f1a47dc7721f507e899201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:24.002924 1137834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/proxy-client.key ...
	I0917 00:53:24.002947 1137834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/proxy-client.key: {Name:mk46b5ab3dd54031bb37b994e58e50639c1ec1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:24.003214 1137834 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem (1338 bytes)
	W0917 00:53:24.003278 1137834 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399_empty.pem, impossibly tiny 0 bytes
	I0917 00:53:24.003291 1137834 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:53:24.003319 1137834 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:53:24.003346 1137834 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:53:24.003371 1137834 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/certs/key.pem (1679 bytes)
	I0917 00:53:24.003410 1137834 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem (1708 bytes)
	I0917 00:53:24.004098 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:53:24.031208 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:53:24.057561 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:53:24.087733 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:53:24.119525 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 00:53:24.145041 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:53:24.170767 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:53:24.197623 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/calico-656031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:53:24.226112 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/ssl/certs/6653992.pem --> /usr/share/ca-certificates/6653992.pem (1708 bytes)
	I0917 00:53:24.256784 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:53:24.285053 1137834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-661878/.minikube/certs/665399.pem --> /usr/share/ca-certificates/665399.pem (1338 bytes)
	I0917 00:53:24.311536 1137834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:53:24.330987 1137834 ssh_runner.go:195] Run: openssl version
	I0917 00:53:24.336954 1137834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6653992.pem && ln -fs /usr/share/ca-certificates/6653992.pem /etc/ssl/certs/6653992.pem"
	I0917 00:53:24.347728 1137834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6653992.pem
	I0917 00:53:24.351598 1137834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 23:53 /usr/share/ca-certificates/6653992.pem
	I0917 00:53:24.351668 1137834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6653992.pem
	I0917 00:53:24.358599 1137834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6653992.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:53:24.368574 1137834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:53:24.378751 1137834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:53:24.382829 1137834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:53:24.382899 1137834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:53:24.390195 1137834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:53:24.400890 1137834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/665399.pem && ln -fs /usr/share/ca-certificates/665399.pem /etc/ssl/certs/665399.pem"
	I0917 00:53:24.412543 1137834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/665399.pem
	I0917 00:53:24.416436 1137834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 23:53 /usr/share/ca-certificates/665399.pem
	I0917 00:53:24.416495 1137834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/665399.pem
	I0917 00:53:24.423792 1137834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/665399.pem /etc/ssl/certs/51391683.0"
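	The openssl x509 -hash calls above compute the subject hash that names the symlinks under /etc/ssl/certs; this is how each CA copied into /usr/share/ca-certificates is made visible to the system trust store. A minimal shell sketch of the same step, using an illustrative file name rather than one from this run:
	    # compute the OpenSSL subject hash for a CA certificate (my-ca.pem is illustrative)
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
	    # expose it to the system trust store under <hash>.0, as the commands above do
	    sudo ln -fs /usr/share/ca-certificates/my-ca.pem /etc/ssl/certs/${HASH}.0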
	I0917 00:53:24.434990 1137834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:53:24.438945 1137834 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:53:24.439017 1137834 kubeadm.go:392] StartCluster: {Name:calico-656031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-656031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:53:24.439162 1137834 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 00:53:24.459225 1137834 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:53:24.469572 1137834 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 00:53:24.479951 1137834 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 00:53:24.480008 1137834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 00:53:24.489773 1137834 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 00:53:24.489794 1137834 kubeadm.go:157] found existing configuration files:
	
	I0917 00:53:24.489838 1137834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 00:53:24.499492 1137834 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 00:53:24.499556 1137834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 00:53:24.509410 1137834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 00:53:24.519726 1137834 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 00:53:24.519800 1137834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 00:53:24.529071 1137834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 00:53:24.538700 1137834 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 00:53:24.538762 1137834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 00:53:24.548371 1137834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 00:53:24.559625 1137834 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 00:53:24.559692 1137834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 00:53:24.571278 1137834 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 00:53:24.652232 1137834 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 00:53:24.710971 1137834 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 00:53:35.808639 1137834 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 00:53:35.808722 1137834 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 00:53:35.808837 1137834 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 00:53:35.808925 1137834 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 00:53:35.808975 1137834 kubeadm.go:310] OS: Linux
	I0917 00:53:35.809040 1137834 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 00:53:35.809123 1137834 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 00:53:35.809218 1137834 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 00:53:35.809276 1137834 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 00:53:35.809316 1137834 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 00:53:35.809356 1137834 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 00:53:35.809395 1137834 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 00:53:35.809431 1137834 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 00:53:35.809535 1137834 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 00:53:35.809685 1137834 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 00:53:35.809814 1137834 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 00:53:35.809899 1137834 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 00:53:35.811968 1137834 out.go:252]   - Generating certificates and keys ...
	I0917 00:53:35.812070 1137834 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 00:53:35.812173 1137834 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 00:53:35.812265 1137834 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 00:53:35.812316 1137834 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 00:53:35.812368 1137834 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 00:53:35.812444 1137834 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 00:53:35.812527 1137834 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 00:53:35.812684 1137834 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-656031 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 00:53:35.812781 1137834 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 00:53:35.812946 1137834 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-656031 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 00:53:35.813017 1137834 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 00:53:35.813068 1137834 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 00:53:35.813111 1137834 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 00:53:35.813159 1137834 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 00:53:35.813223 1137834 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 00:53:35.813291 1137834 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 00:53:35.813343 1137834 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 00:53:35.813409 1137834 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 00:53:35.813453 1137834 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 00:53:35.813518 1137834 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 00:53:35.813594 1137834 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 00:53:35.815091 1137834 out.go:252]   - Booting up control plane ...
	I0917 00:53:35.815191 1137834 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 00:53:35.815272 1137834 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 00:53:35.815361 1137834 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 00:53:35.815455 1137834 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 00:53:35.815577 1137834 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 00:53:35.815719 1137834 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 00:53:35.815839 1137834 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 00:53:35.815897 1137834 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 00:53:35.816096 1137834 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 00:53:35.816266 1137834 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 00:53:35.816338 1137834 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.739771ms
	I0917 00:53:35.816428 1137834 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 00:53:35.816555 1137834 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0917 00:53:35.816684 1137834 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 00:53:35.816818 1137834 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 00:53:35.816900 1137834 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.721819178s
	I0917 00:53:35.817018 1137834 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.761414506s
	I0917 00:53:35.817137 1137834 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.502365695s
	I0917 00:53:35.817289 1137834 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 00:53:35.817474 1137834 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 00:53:35.817543 1137834 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 00:53:35.817724 1137834 kubeadm.go:310] [mark-control-plane] Marking the node calico-656031 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 00:53:35.817825 1137834 kubeadm.go:310] [bootstrap-token] Using token: fyr4hj.b88vbnneizujgyj3
	I0917 00:53:35.820536 1137834 out.go:252]   - Configuring RBAC rules ...
	I0917 00:53:35.820647 1137834 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 00:53:35.820735 1137834 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 00:53:35.820852 1137834 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 00:53:35.821017 1137834 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 00:53:35.821117 1137834 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 00:53:35.821192 1137834 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 00:53:35.821285 1137834 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 00:53:35.821324 1137834 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 00:53:35.821361 1137834 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 00:53:35.821367 1137834 kubeadm.go:310] 
	I0917 00:53:35.821462 1137834 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 00:53:35.821473 1137834 kubeadm.go:310] 
	I0917 00:53:35.821562 1137834 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 00:53:35.821569 1137834 kubeadm.go:310] 
	I0917 00:53:35.821589 1137834 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 00:53:35.821641 1137834 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 00:53:35.821687 1137834 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 00:53:35.821692 1137834 kubeadm.go:310] 
	I0917 00:53:35.821738 1137834 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 00:53:35.821743 1137834 kubeadm.go:310] 
	I0917 00:53:35.821800 1137834 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 00:53:35.821814 1137834 kubeadm.go:310] 
	I0917 00:53:35.821865 1137834 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 00:53:35.821995 1137834 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 00:53:35.822113 1137834 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 00:53:35.822123 1137834 kubeadm.go:310] 
	I0917 00:53:35.822221 1137834 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 00:53:35.822289 1137834 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 00:53:35.822295 1137834 kubeadm.go:310] 
	I0917 00:53:35.822359 1137834 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fyr4hj.b88vbnneizujgyj3 \
	I0917 00:53:35.822440 1137834 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e \
	I0917 00:53:35.822471 1137834 kubeadm.go:310] 	--control-plane 
	I0917 00:53:35.822484 1137834 kubeadm.go:310] 
	I0917 00:53:35.822555 1137834 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 00:53:35.822561 1137834 kubeadm.go:310] 
	I0917 00:53:35.822633 1137834 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fyr4hj.b88vbnneizujgyj3 \
	I0917 00:53:35.822745 1137834 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa81889111a47026234b9abf30001d51a9462bec006420d404163720ad63709e 
	I0917 00:53:35.822757 1137834 cni.go:84] Creating CNI manager for "calico"
	I0917 00:53:35.824551 1137834 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0917 00:53:35.826334 1137834 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 00:53:35.826357 1137834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0917 00:53:35.849286 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
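	With the Calico manifest applied via the bundled kubectl, a hedged way to confirm the CNI components come up would be to query their pods by label (the selectors below are the standard Calico labels, assumed rather than taken from this output):
	    # check the calico-node daemonset pods and the calico-kube-controllers deployment
	    kubectl -n kube-system get pods -l k8s-app=calico-node
	    kubectl -n kube-system get pods -l k8s-app=calico-kube-controllers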
	I0917 00:53:36.848240 1137834 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 00:53:36.848387 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-656031 minikube.k8s.io/updated_at=2025_09_17T00_53_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=calico-656031 minikube.k8s.io/primary=true
	I0917 00:53:36.848387 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
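	The minikube-rbac binding created above grants cluster-admin to the default service account in kube-system; a quick, assumed-equivalent way to inspect it after the run:
	    # show the clusterrolebinding that backs kube-system's default service account
	    kubectl get clusterrolebinding minikube-rbac -o yaml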
	I0917 00:53:36.858971 1137834 ops.go:34] apiserver oom_adj: -16
	I0917 00:53:36.929329 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:37.429435 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:37.930094 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:38.430000 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:38.930124 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:39.429703 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:39.930108 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:40.430119 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:40.930350 1137834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:53:41.032770 1137834 kubeadm.go:1105] duration metric: took 4.184441521s to wait for elevateKubeSystemPrivileges
	I0917 00:53:41.032809 1137834 kubeadm.go:394] duration metric: took 16.593798545s to StartCluster
	I0917 00:53:41.032833 1137834 settings.go:142] acquiring lock: {Name:mk17965980d5178c2751d83eb1933be3ac57e811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:41.032952 1137834 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0917 00:53:41.034641 1137834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-661878/kubeconfig: {Name:mk609009f6fceff95c9f72883135342a90d871f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:53:41.034936 1137834 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 00:53:41.035365 1137834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 00:53:41.035549 1137834 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:53:41.035656 1137834 addons.go:69] Setting storage-provisioner=true in profile "calico-656031"
	I0917 00:53:41.035670 1137834 config.go:182] Loaded profile config "calico-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:53:41.035683 1137834 addons.go:69] Setting default-storageclass=true in profile "calico-656031"
	I0917 00:53:41.035698 1137834 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-656031"
	I0917 00:53:41.035674 1137834 addons.go:238] Setting addon storage-provisioner=true in "calico-656031"
	I0917 00:53:41.035746 1137834 host.go:66] Checking if "calico-656031" exists ...
	I0917 00:53:41.036119 1137834 cli_runner.go:164] Run: docker container inspect calico-656031 --format={{.State.Status}}
	I0917 00:53:41.036538 1137834 cli_runner.go:164] Run: docker container inspect calico-656031 --format={{.State.Status}}
	I0917 00:53:41.037649 1137834 out.go:179] * Verifying Kubernetes components...
	I0917 00:53:41.039080 1137834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:53:41.066208 1137834 addons.go:238] Setting addon default-storageclass=true in "calico-656031"
	I0917 00:53:41.066259 1137834 host.go:66] Checking if "calico-656031" exists ...
	I0917 00:53:41.066762 1137834 cli_runner.go:164] Run: docker container inspect calico-656031 --format={{.State.Status}}
	I0917 00:53:41.070199 1137834 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:53:41.071694 1137834 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:53:41.071714 1137834 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:53:41.071791 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:41.096052 1137834 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:53:41.096079 1137834 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:53:41.096137 1137834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-656031
	I0917 00:53:41.101502 1137834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa Username:docker}
	I0917 00:53:41.120403 1137834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/calico-656031/id_rsa Username:docker}
	I0917 00:53:41.146259 1137834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 00:53:41.191089 1137834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:53:41.225532 1137834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:53:41.244572 1137834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:53:41.379492 1137834 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0917 00:53:41.380737 1137834 node_ready.go:35] waiting up to 15m0s for node "calico-656031" to be "Ready" ...
	I0917 00:53:41.392468 1137834 node_ready.go:49] node "calico-656031" is "Ready"
	I0917 00:53:41.392498 1137834 node_ready.go:38] duration metric: took 11.730434ms for node "calico-656031" to be "Ready" ...
	I0917 00:53:41.392519 1137834 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:53:41.392617 1137834 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:53:41.722263 1137834 api_server.go:72] duration metric: took 687.278042ms to wait for apiserver process to appear ...
	I0917 00:53:41.722292 1137834 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:53:41.722313 1137834 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0917 00:53:41.730501 1137834 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
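	The healthz probe above is an ordinary HTTPS GET against the apiserver; a minimal sketch of repeating it by hand (the address is the one reported in this run, and -k skips CA verification purely for brevity):
	    curl -k https://192.168.103.2:8443/healthz
	    # a healthy apiserver answers 200 with the body: ok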
	I0917 00:53:41.731832 1137834 api_server.go:141] control plane version: v1.34.0
	I0917 00:53:41.731999 1137834 api_server.go:131] duration metric: took 9.695554ms to wait for apiserver health ...
	I0917 00:53:41.732028 1137834 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:53:41.735775 1137834 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 00:53:41.737542 1137834 system_pods.go:59] 10 kube-system pods found
	I0917 00:53:41.737637 1137834 system_pods.go:61] "calico-kube-controllers-59556d9b4c-mqvdp" [76b88cef-0e16-4568-875e-a518fc6b66f9] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0917 00:53:41.737730 1137834 system_pods.go:61] "calico-node-9r6kt" [aa09336e-c07a-41cf-bb3d-3c1098498037] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0917 00:53:41.737807 1137834 system_pods.go:61] "coredns-66bc5c9577-dbzfd" [29b34cdf-a069-4eca-939a-6bff5ecbbd07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:41.737686 1137834 addons.go:514] duration metric: took 702.15354ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 00:53:41.737878 1137834 system_pods.go:61] "coredns-66bc5c9577-nw7hb" [fb83d74e-d141-4f55-bcb7-57b16aed2407] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:41.737898 1137834 system_pods.go:61] "etcd-calico-656031" [30f3117b-dd52-452a-9b0f-06bb8b845538] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:53:41.737952 1137834 system_pods.go:61] "kube-apiserver-calico-656031" [3fabe1ca-025f-4572-a8e4-11c19cdcb2b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:53:41.737973 1137834 system_pods.go:61] "kube-controller-manager-calico-656031" [2ab66156-fb1c-4d85-8e3c-68a19430114b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:53:41.737992 1137834 system_pods.go:61] "kube-proxy-l2wpq" [8d2b90e3-cbfc-491c-9ec3-643f4ee1a330] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:53:41.738021 1137834 system_pods.go:61] "kube-scheduler-calico-656031" [8be7a7d2-8cd9-4503-884b-28c507751ac9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:53:41.738053 1137834 system_pods.go:61] "storage-provisioner" [528f8dc9-b39b-4cda-baf9-46b16682e49b] Pending
	I0917 00:53:41.738077 1137834 system_pods.go:74] duration metric: took 6.030265ms to wait for pod list to return data ...
	I0917 00:53:41.738096 1137834 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:53:41.740956 1137834 default_sa.go:45] found service account: "default"
	I0917 00:53:41.740981 1137834 default_sa.go:55] duration metric: took 2.855145ms for default service account to be created ...
	I0917 00:53:41.740993 1137834 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:53:41.745494 1137834 system_pods.go:86] 10 kube-system pods found
	I0917 00:53:41.745767 1137834 system_pods.go:89] "calico-kube-controllers-59556d9b4c-mqvdp" [76b88cef-0e16-4568-875e-a518fc6b66f9] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0917 00:53:41.745787 1137834 system_pods.go:89] "calico-node-9r6kt" [aa09336e-c07a-41cf-bb3d-3c1098498037] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0917 00:53:41.745806 1137834 system_pods.go:89] "coredns-66bc5c9577-dbzfd" [29b34cdf-a069-4eca-939a-6bff5ecbbd07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:41.745815 1137834 system_pods.go:89] "coredns-66bc5c9577-nw7hb" [fb83d74e-d141-4f55-bcb7-57b16aed2407] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:41.745823 1137834 system_pods.go:89] "etcd-calico-656031" [30f3117b-dd52-452a-9b0f-06bb8b845538] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:53:41.745833 1137834 system_pods.go:89] "kube-apiserver-calico-656031" [3fabe1ca-025f-4572-a8e4-11c19cdcb2b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:53:41.745842 1137834 system_pods.go:89] "kube-controller-manager-calico-656031" [2ab66156-fb1c-4d85-8e3c-68a19430114b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:53:41.745850 1137834 system_pods.go:89] "kube-proxy-l2wpq" [8d2b90e3-cbfc-491c-9ec3-643f4ee1a330] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:53:41.745859 1137834 system_pods.go:89] "kube-scheduler-calico-656031" [8be7a7d2-8cd9-4503-884b-28c507751ac9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:53:41.745865 1137834 system_pods.go:89] "storage-provisioner" [528f8dc9-b39b-4cda-baf9-46b16682e49b] Pending
	I0917 00:53:41.745896 1137834 retry.go:31] will retry after 228.791032ms: missing components: kube-dns, kube-proxy
	I0917 00:53:41.883950 1137834 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-656031" context rescaled to 1 replicas
	I0917 00:53:41.979206 1137834 system_pods.go:86] 10 kube-system pods found
	I0917 00:53:41.979245 1137834 system_pods.go:89] "calico-kube-controllers-59556d9b4c-mqvdp" [76b88cef-0e16-4568-875e-a518fc6b66f9] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0917 00:53:41.979257 1137834 system_pods.go:89] "calico-node-9r6kt" [aa09336e-c07a-41cf-bb3d-3c1098498037] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0917 00:53:41.979266 1137834 system_pods.go:89] "coredns-66bc5c9577-dbzfd" [29b34cdf-a069-4eca-939a-6bff5ecbbd07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:41.979279 1137834 system_pods.go:89] "coredns-66bc5c9577-nw7hb" [fb83d74e-d141-4f55-bcb7-57b16aed2407] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:41.979287 1137834 system_pods.go:89] "etcd-calico-656031" [30f3117b-dd52-452a-9b0f-06bb8b845538] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:53:41.979298 1137834 system_pods.go:89] "kube-apiserver-calico-656031" [3fabe1ca-025f-4572-a8e4-11c19cdcb2b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:53:41.979311 1137834 system_pods.go:89] "kube-controller-manager-calico-656031" [2ab66156-fb1c-4d85-8e3c-68a19430114b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:53:41.979320 1137834 system_pods.go:89] "kube-proxy-l2wpq" [8d2b90e3-cbfc-491c-9ec3-643f4ee1a330] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:53:41.979331 1137834 system_pods.go:89] "kube-scheduler-calico-656031" [8be7a7d2-8cd9-4503-884b-28c507751ac9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:53:41.979339 1137834 system_pods.go:89] "storage-provisioner" [528f8dc9-b39b-4cda-baf9-46b16682e49b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:53:41.979363 1137834 retry.go:31] will retry after 331.330093ms: missing components: kube-dns, kube-proxy
	I0917 00:53:42.316664 1137834 system_pods.go:86] 10 kube-system pods found
	I0917 00:53:42.316736 1137834 system_pods.go:89] "calico-kube-controllers-59556d9b4c-mqvdp" [76b88cef-0e16-4568-875e-a518fc6b66f9] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0917 00:53:42.316754 1137834 system_pods.go:89] "calico-node-9r6kt" [aa09336e-c07a-41cf-bb3d-3c1098498037] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0917 00:53:42.316769 1137834 system_pods.go:89] "coredns-66bc5c9577-dbzfd" [29b34cdf-a069-4eca-939a-6bff5ecbbd07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:42.316779 1137834 system_pods.go:89] "coredns-66bc5c9577-nw7hb" [fb83d74e-d141-4f55-bcb7-57b16aed2407] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:53:42.316791 1137834 system_pods.go:89] "etcd-calico-656031" [30f3117b-dd52-452a-9b0f-06bb8b845538] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:53:42.316804 1137834 system_pods.go:89] "kube-apiserver-calico-656031" [3fabe1ca-025f-4572-a8e4-11c19cdcb2b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:53:42.316815 1137834 system_pods.go:89] "kube-controller-manager-calico-656031" [2ab66156-fb1c-4d85-8e3c-68a19430114b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:53:42.316824 1137834 system_pods.go:89] "kube-proxy-l2wpq" [8d2b90e3-cbfc-491c-9ec3-643f4ee1a330] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:53:42.316832 1137834 system_pods.go:89] "kube-scheduler-calico-656031" [8be7a7d2-8cd9-4503-884b-28c507751ac9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:53:42.316841 1137834 system_pods.go:89] "storage-provisioner" [528f8dc9-b39b-4cda-baf9-46b16682e49b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:53:42.316885 1137834 system_pods.go:126] duration metric: took 575.884248ms to wait for k8s-apps to be running ...
	I0917 00:53:42.316923 1137834 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:53:42.316986 1137834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:53:42.333563 1137834 system_svc.go:56] duration metric: took 16.64737ms WaitForService to wait for kubelet
	I0917 00:53:42.333603 1137834 kubeadm.go:578] duration metric: took 1.29861951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:53:42.333632 1137834 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:53:42.337584 1137834 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:53:42.337612 1137834 node_conditions.go:123] node cpu capacity is 8
	I0917 00:53:42.337628 1137834 node_conditions.go:105] duration metric: took 3.989533ms to run NodePressure ...
	I0917 00:53:42.337644 1137834 start.go:241] waiting for startup goroutines ...
	I0917 00:53:42.337654 1137834 start.go:246] waiting for cluster config update ...
	I0917 00:53:42.337668 1137834 start.go:255] writing updated cluster config ...
	I0917 00:53:42.338147 1137834 ssh_runner.go:195] Run: rm -f paused
	I0917 00:53:42.342892 1137834 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:53:42.416546 1137834 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dbzfd" in "kube-system" namespace to be "Ready" or be gone ...
	W0917 00:53:44.422977 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:53:46.423937 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:53:48.922606 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:53:50.923014 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:53:53.422391 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:53:55.922642 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:53:58.421401 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:00.421801 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:02.422770 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:04.922488 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:07.423239 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:09.923040 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:12.422538 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:14.422827 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:16.922784 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:19.424184 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:21.922459 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:24.423015 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:26.921636 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:28.922025 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:30.922484 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:32.923627 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:35.422416 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:37.423535 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:39.923437 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:42.421415 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:44.421763 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:46.422061 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:48.922978 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:51.421825 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:53.422466 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:55.921751 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:54:58.422630 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:00.923110 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:03.422506 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:05.423363 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:07.921841 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:10.422686 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:12.422837 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:14.922454 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:17.422687 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:19.424792 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:21.923854 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:24.421806 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:26.423033 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:28.921793 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:30.922302 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:33.422725 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:35.921732 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:37.923185 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:40.422649 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:42.922480 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:44.922551 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:47.422440 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:49.423186 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:51.921533 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:53.922122 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:55.923596 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:55:58.422596 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:00.922158 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:03.423197 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:05.423292 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:07.922270 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:09.922322 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:12.422634 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:14.424237 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:16.922192 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:18.922878 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:21.421547 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:23.423014 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:25.921977 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:27.923594 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:30.421719 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:32.422817 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:34.923343 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:37.422221 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:39.422782 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:41.423643 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:43.922294 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:45.922791 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:47.923518 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:50.422140 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:52.922164 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:54.922304 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:56.922617 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:56:58.924507 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:01.422312 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:03.423770 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:05.922402 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:07.926124 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:10.423121 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:12.922694 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:15.423367 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:17.923084 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:20.421722 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:22.424190 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:24.922321 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:26.922648 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:28.923934 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:31.425008 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:33.922308 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:35.923212 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:37.924311 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	W0917 00:57:40.422438 1137834 pod_ready.go:104] pod "coredns-66bc5c9577-dbzfd" is not "Ready", error: <nil>
	I0917 00:57:42.344079 1137834 pod_ready.go:86] duration metric: took 3m59.927489229s for pod "coredns-66bc5c9577-dbzfd" in "kube-system" namespace to be "Ready" or be gone ...
	W0917 00:57:42.344130 1137834 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0917 00:57:42.344150 1137834 pod_ready.go:40] duration metric: took 4m0.001160415s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:57:42.346037 1137834 out.go:203] 
	W0917 00:57:42.347529 1137834 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0917 00:57:42.349196 1137834 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (272.01s)
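The failure above is a readiness timeout: the start code polled the coredns pod for four minutes, never saw it Ready, and exited with GUEST_START. The report does not capture any follow-up inspection of the cluster; a minimal debugging sketch against the same profile might look like the commands below (the profile name is a placeholder, the pod name and the k8s-app=kube-dns label are taken from the wait loop in the log):

# inspect why the coredns pod never became Ready (profile name is illustrative)
out/minikube-linux-amd64 -p <calico-profile> kubectl -- -n kube-system get pods -l k8s-app=kube-dns -o wide
out/minikube-linux-amd64 -p <calico-profile> kubectl -- -n kube-system describe pod coredns-66bc5c9577-dbzfd
out/minikube-linux-amd64 -p <calico-profile> kubectl -- -n kube-system logs coredns-66bc5c9577-dbzfd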

                                                
                                    

Test pass (292/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 12.85
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 12.57
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.04
21 TestBinaryMirror 0.81
22 TestOffline 57.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 144.75
29 TestAddons/serial/Volcano 40.8
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 16.02
36 TestAddons/parallel/RegistryCreds 0.59
37 TestAddons/parallel/Ingress 21.19
38 TestAddons/parallel/InspektorGadget 6.21
39 TestAddons/parallel/MetricsServer 5.59
41 TestAddons/parallel/CSI 49.26
42 TestAddons/parallel/Headlamp 20.33
43 TestAddons/parallel/CloudSpanner 5.47
44 TestAddons/parallel/LocalPath 55.59
45 TestAddons/parallel/NvidiaDevicePlugin 6.5
46 TestAddons/parallel/Yakd 11.61
47 TestAddons/parallel/AmdGpuDevicePlugin 6.5
48 TestAddons/StoppedEnableDisable 11.14
49 TestCertOptions 29.05
50 TestCertExpiration 242.51
51 TestDockerFlags 29.55
52 TestForceSystemdFlag 27.36
53 TestForceSystemdEnv 28.8
55 TestKVMDriverInstallOrUpdate 2.31
59 TestErrorSpam/setup 23.04
60 TestErrorSpam/start 0.6
61 TestErrorSpam/status 0.9
62 TestErrorSpam/pause 1.16
63 TestErrorSpam/unpause 1.23
64 TestErrorSpam/stop 10.89
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 40.69
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 46.29
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.25
76 TestFunctional/serial/CacheCmd/cache/add_local 1.43
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.28
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 48.57
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.01
87 TestFunctional/serial/LogsFileCmd 1.04
88 TestFunctional/serial/InvalidService 4.44
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 13.38
92 TestFunctional/parallel/DryRun 0.38
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.93
98 TestFunctional/parallel/ServiceCmdConnect 8.67
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 33.01
102 TestFunctional/parallel/SSHCmd 0.52
103 TestFunctional/parallel/CpCmd 1.79
104 TestFunctional/parallel/MySQL 23.73
105 TestFunctional/parallel/FileSync 0.31
106 TestFunctional/parallel/CertSync 1.77
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.3
114 TestFunctional/parallel/License 0.38
115 TestFunctional/parallel/DockerEnv/bash 1.1
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
119 TestFunctional/parallel/Version/short 0.06
120 TestFunctional/parallel/Version/components 0.55
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
125 TestFunctional/parallel/ImageCommands/ImageBuild 3.82
126 TestFunctional/parallel/ImageCommands/Setup 1.86
127 TestFunctional/parallel/ServiceCmd/DeployApp 18.15
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.23
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.06
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
140 TestFunctional/parallel/ServiceCmd/List 0.53
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
143 TestFunctional/parallel/ServiceCmd/Format 0.38
144 TestFunctional/parallel/ServiceCmd/URL 0.34
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
152 TestFunctional/parallel/MountCmd/any-port 7.6
153 TestFunctional/parallel/ProfileCmd/profile_list 0.39
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
155 TestFunctional/parallel/MountCmd/specific-port 2.08
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
157 TestFunctional/delete_echo-server_images 0.04
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/StartCluster 277.37
168 TestMultiControlPlane/serial/NodeLabels 0.07
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.73
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.74
178 TestMultiControlPlane/serial/StopCluster 21.7
182 TestImageBuild/serial/Setup 22.35
183 TestImageBuild/serial/NormalBuild 1.08
184 TestImageBuild/serial/BuildWithBuildArg 0.67
185 TestImageBuild/serial/BuildWithDockerIgnore 0.47
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.5
190 TestJSONOutput/start/Command 36.38
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.47
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.44
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 10.8
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.2
215 TestKicCustomNetwork/create_custom_network 23.67
216 TestKicCustomNetwork/use_default_bridge_network 23.27
217 TestKicExistingNetwork 24.49
218 TestKicCustomSubnet 24.04
219 TestKicStaticIP 23.87
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 51.69
224 TestMountStart/serial/StartWithMountFirst 9.12
225 TestMountStart/serial/VerifyMountFirst 0.26
226 TestMountStart/serial/StartWithMountSecond 8.73
227 TestMountStart/serial/VerifyMountSecond 0.25
228 TestMountStart/serial/DeleteFirst 1.52
229 TestMountStart/serial/VerifyMountPostDelete 0.25
230 TestMountStart/serial/Stop 1.18
231 TestMountStart/serial/RestartStopped 9.52
232 TestMountStart/serial/VerifyMountPostStop 0.25
235 TestMultiNode/serial/FreshStart2Nodes 58.39
236 TestMultiNode/serial/DeployApp2Nodes 41.13
237 TestMultiNode/serial/PingHostFrom2Pods 0.76
238 TestMultiNode/serial/AddNode 13.65
239 TestMultiNode/serial/MultiNodeLabels 0.07
240 TestMultiNode/serial/ProfileList 0.65
241 TestMultiNode/serial/CopyFile 9.68
242 TestMultiNode/serial/StopNode 2.18
243 TestMultiNode/serial/StartAfterStop 8.73
244 TestMultiNode/serial/RestartKeepsNodes 73.51
245 TestMultiNode/serial/DeleteNode 5.18
246 TestMultiNode/serial/StopMultiNode 21.59
247 TestMultiNode/serial/RestartMultiNode 46.68
248 TestMultiNode/serial/ValidateNameConflict 25.79
253 TestPreload 99.81
255 TestScheduledStopUnix 95.14
256 TestSkaffold 80.71
258 TestInsufficientStorage 9.89
259 TestRunningBinaryUpgrade 74.91
261 TestKubernetesUpgrade 341.98
262 TestMissingContainerUpgrade 108.42
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
265 TestStoppedBinaryUpgrade/Setup 3.19
266 TestNoKubernetes/serial/StartWithK8s 42.02
267 TestStoppedBinaryUpgrade/Upgrade 80.14
268 TestNoKubernetes/serial/StartWithStopK8s 18.75
269 TestNoKubernetes/serial/Start 8.24
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
271 TestNoKubernetes/serial/ProfileList 1.34
272 TestNoKubernetes/serial/Stop 1.21
273 TestNoKubernetes/serial/StartNoArgs 8.83
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
284 TestPause/serial/Start 49.46
296 TestPause/serial/SecondStartNoReconfiguration 51.88
297 TestPause/serial/Pause 0.46
298 TestPause/serial/VerifyStatus 0.31
299 TestPause/serial/Unpause 0.52
300 TestPause/serial/PauseAgain 0.53
301 TestPause/serial/DeletePaused 2.2
302 TestPause/serial/VerifyDeletedResources 16.24
304 TestStartStop/group/old-k8s-version/serial/FirstStart 40.74
306 TestStartStop/group/no-preload/serial/FirstStart 51.42
307 TestStartStop/group/old-k8s-version/serial/DeployApp 8.27
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
309 TestStartStop/group/old-k8s-version/serial/Stop 10.85
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
311 TestStartStop/group/old-k8s-version/serial/SecondStart 122.18
312 TestStartStop/group/no-preload/serial/DeployApp 9.26
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
314 TestStartStop/group/no-preload/serial/Stop 10.83
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
316 TestStartStop/group/no-preload/serial/SecondStart 157.26
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
320 TestStartStop/group/embed-certs/serial/FirstStart 66.01
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.64
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
324 TestStartStop/group/old-k8s-version/serial/Pause 3.22
326 TestStartStop/group/newest-cni/serial/FirstStart 31.4
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
329 TestStartStop/group/newest-cni/serial/Stop 10.79
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
331 TestStartStop/group/newest-cni/serial/SecondStart 16.6
332 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
334 TestStartStop/group/embed-certs/serial/DeployApp 9.28
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.21
336 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
337 TestStartStop/group/no-preload/serial/Pause 2.65
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
342 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
343 TestNetworkPlugins/group/auto/Start 113.78
344 TestStartStop/group/embed-certs/serial/Stop 10.82
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.29
346 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.84
347 TestNetworkPlugins/group/kindnet/Start 168.37
348 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
349 TestStartStop/group/embed-certs/serial/SecondStart 26.1
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
351 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 97.25
352 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
354 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
355 TestStartStop/group/embed-certs/serial/Pause 2.38
356 TestNetworkPlugins/group/flannel/Start 79.07
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
358 TestNetworkPlugins/group/auto/KubeletFlags 0.27
359 TestNetworkPlugins/group/auto/NetCatPod 10.19
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
362 TestNetworkPlugins/group/auto/DNS 0.15
363 TestNetworkPlugins/group/auto/Localhost 0.13
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.43
365 TestNetworkPlugins/group/auto/HairPin 0.14
366 TestNetworkPlugins/group/enable-default-cni/Start 64.18
367 TestNetworkPlugins/group/flannel/ControllerPod 6.01
368 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
369 TestNetworkPlugins/group/flannel/NetCatPod 8.2
371 TestNetworkPlugins/group/flannel/DNS 0.16
372 TestNetworkPlugins/group/flannel/Localhost 0.12
373 TestNetworkPlugins/group/flannel/HairPin 0.13
375 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
376 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
377 TestNetworkPlugins/group/kindnet/NetCatPod 8.23
378 TestNetworkPlugins/group/kindnet/DNS 0.14
379 TestNetworkPlugins/group/kindnet/Localhost 0.11
380 TestNetworkPlugins/group/kindnet/HairPin 0.11
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
387 TestNetworkPlugins/group/false/Start 355.81
388 TestNetworkPlugins/group/custom-flannel/Start 64.87
389 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
390 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.19
391 TestNetworkPlugins/group/custom-flannel/DNS 0.14
392 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
393 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
394 TestNetworkPlugins/group/false/KubeletFlags 0.27
395 TestNetworkPlugins/group/false/NetCatPod 8.18
396 TestNetworkPlugins/group/false/DNS 0.14
397 TestNetworkPlugins/group/false/Localhost 0.11
398 TestNetworkPlugins/group/false/HairPin 0.11
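Each entry in the table above is an ordinary go test name, so an individual case can be exercised on its own. The exact harness invocation is not part of this report; the sketch below assumes minikube's test/integration package layout and an already-built out/minikube-linux-amd64, and omits any harness-specific flags:

# hypothetical stand-alone re-run of one test from the table (harness flags omitted)
go test ./test/integration -run 'TestOffline' -timeout 30m -v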
x
+
TestDownloadOnly/v1.28.0/json-events (12.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-003474 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-003474 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.854097306s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.85s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0916 23:47:55.941619  665399 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0916 23:47:55.941721  665399 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
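The preload-exists check above only asserts that the tarball named in the log is already on disk. A roughly equivalent manual check, using the exact cache path from the log line above, would be:

ls -lh /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4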

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-003474
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-003474: exit status 85 (64.443119ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-003474 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-003474 │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:47:43
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:47:43.128376  665411 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:47:43.128616  665411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:47:43.128624  665411 out.go:374] Setting ErrFile to fd 2...
	I0916 23:47:43.128629  665411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:47:43.128861  665411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	W0916 23:47:43.129006  665411 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21550-661878/.minikube/config/config.json: open /home/jenkins/minikube-integration/21550-661878/.minikube/config/config.json: no such file or directory
	I0916 23:47:43.129490  665411 out.go:368] Setting JSON to true
	I0916 23:47:43.130398  665411 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8995,"bootTime":1758057468,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:47:43.130487  665411 start.go:140] virtualization: kvm guest
	I0916 23:47:43.132615  665411 out.go:99] [download-only-003474] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0916 23:47:43.132763  665411 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 23:47:43.132828  665411 notify.go:220] Checking for updates...
	I0916 23:47:43.134025  665411 out.go:171] MINIKUBE_LOCATION=21550
	I0916 23:47:43.135303  665411 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:47:43.136551  665411 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:47:43.137712  665411 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:47:43.138767  665411 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 23:47:43.140811  665411 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 23:47:43.141059  665411 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:47:43.163543  665411 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:47:43.163632  665411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:47:43.216080  665411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-16 23:47:43.205797209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:47:43.216190  665411 docker.go:318] overlay module found
	I0916 23:47:43.217831  665411 out.go:99] Using the docker driver based on user configuration
	I0916 23:47:43.217871  665411 start.go:304] selected driver: docker
	I0916 23:47:43.217880  665411 start.go:918] validating driver "docker" against <nil>
	I0916 23:47:43.218006  665411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:47:43.270742  665411 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-16 23:47:43.261318584 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:47:43.271000  665411 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:47:43.271699  665411 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0916 23:47:43.271915  665411 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 23:47:43.273564  665411 out.go:171] Using Docker driver with root privileges
	I0916 23:47:43.274753  665411 cni.go:84] Creating CNI manager for ""
	I0916 23:47:43.274826  665411 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 23:47:43.274840  665411 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 23:47:43.274931  665411 start.go:348] cluster config:
	{Name:download-only-003474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-003474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:47:43.276212  665411 out.go:99] Starting "download-only-003474" primary control-plane node in "download-only-003474" cluster
	I0916 23:47:43.276232  665411 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:47:43.277313  665411 out.go:99] Pulling base image v0.0.48 ...
	I0916 23:47:43.277337  665411 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0916 23:47:43.277440  665411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:47:43.295134  665411 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:47:43.295372  665411 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:47:43.295464  665411 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:47:43.672700  665411 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I0916 23:47:43.672761  665411 cache.go:58] Caching tarball of preloaded images
	I0916 23:47:43.673006  665411 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0916 23:47:43.675082  665411 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0916 23:47:43.675120  665411 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 23:47:43.777864  665411 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-003474 host does not exist
	  To start a cluster, run: "minikube start -p download-only-003474"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
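The captured start log above downloads the v1.28.0 preload with an md5 digest carried in the ?checksum= query parameter. If that download ever needed to be re-verified by hand, a minimal sketch (file path and expected digest both taken from the log) would be:

md5sum /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
# expected digest, from the ?checksum= parameter in the download URL: 8a955be835827bc584bcce0658a7fcc9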

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-003474
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/json-events (12.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-847548 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-847548 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.568634352s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (12.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0916 23:48:08.925207  665399 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0916 23:48:08.925262  665399 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-847548
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-847548: exit status 85 (64.321696ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-003474 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-003474 │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │ 16 Sep 25 23:47 UTC │
	│ delete  │ -p download-only-003474                                                                                                                                                       │ download-only-003474 │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │ 16 Sep 25 23:47 UTC │
	│ start   │ -o=json --download-only -p download-only-847548 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-847548 │ jenkins │ v1.37.0 │ 16 Sep 25 23:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:47:56
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:47:56.397811  665793 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:47:56.398130  665793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:47:56.398141  665793 out.go:374] Setting ErrFile to fd 2...
	I0916 23:47:56.398148  665793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:47:56.398350  665793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0916 23:47:56.398888  665793 out.go:368] Setting JSON to true
	I0916 23:47:56.399775  665793 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9008,"bootTime":1758057468,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:47:56.399873  665793 start.go:140] virtualization: kvm guest
	I0916 23:47:56.402007  665793 out.go:99] [download-only-847548] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:47:56.402194  665793 notify.go:220] Checking for updates...
	I0916 23:47:56.403549  665793 out.go:171] MINIKUBE_LOCATION=21550
	I0916 23:47:56.404808  665793 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:47:56.406166  665793 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:47:56.407404  665793 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:47:56.408606  665793 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 23:47:56.410932  665793 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 23:47:56.411147  665793 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:47:56.433600  665793 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:47:56.433699  665793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:47:56.486485  665793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:51 SystemTime:2025-09-16 23:47:56.477681248 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:47:56.486594  665793 docker.go:318] overlay module found
	I0916 23:47:56.488522  665793 out.go:99] Using the docker driver based on user configuration
	I0916 23:47:56.488564  665793 start.go:304] selected driver: docker
	I0916 23:47:56.488570  665793 start.go:918] validating driver "docker" against <nil>
	I0916 23:47:56.488656  665793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:47:56.541263  665793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:51 SystemTime:2025-09-16 23:47:56.530769114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:47:56.541448  665793 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:47:56.541989  665793 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0916 23:47:56.542173  665793 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 23:47:56.544177  665793 out.go:171] Using Docker driver with root privileges
	I0916 23:47:56.545676  665793 cni.go:84] Creating CNI manager for ""
	I0916 23:47:56.545779  665793 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 23:47:56.545795  665793 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 23:47:56.545891  665793 start.go:348] cluster config:
	{Name:download-only-847548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-847548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:47:56.547452  665793 out.go:99] Starting "download-only-847548" primary control-plane node in "download-only-847548" cluster
	I0916 23:47:56.547480  665793 cache.go:123] Beginning downloading kic base image for docker with docker
	I0916 23:47:56.548711  665793 out.go:99] Pulling base image v0.0.48 ...
	I0916 23:47:56.548738  665793 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:47:56.548854  665793 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:47:56.565058  665793 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:47:56.565248  665793 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:47:56.565271  665793 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0916 23:47:56.565289  665793 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0916 23:47:56.565300  665793 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0916 23:47:56.939331  665793 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0916 23:47:56.939369  665793 cache.go:58] Caching tarball of preloaded images
	I0916 23:47:56.939588  665793 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0916 23:47:56.941381  665793 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0916 23:47:56.941399  665793 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 23:47:57.439750  665793 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4?checksum=md5:994a4de1464928e89c992dfd0a962e35 -> /home/jenkins/minikube-integration/21550-661878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-847548 host does not exist
	  To start a cluster, run: "minikube start -p download-only-847548"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

TestDownloadOnly/v1.34.0/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-847548
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.04s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-844979 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-844979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-844979
--- PASS: TestDownloadOnlyKic (1.04s)

TestBinaryMirror (0.81s)
=== RUN   TestBinaryMirror
I0916 23:48:10.633126  665399 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-538658 --alsologtostderr --binary-mirror http://127.0.0.1:36991 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-538658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-538658
--- PASS: TestBinaryMirror (0.81s)

TestOffline (57.81s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-484249 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-484249 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (55.606627488s)
helpers_test.go:175: Cleaning up "offline-docker-484249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-484249
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-484249: (2.203016418s)
--- PASS: TestOffline (57.81s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-875427
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-875427: exit status 85 (53.94169ms)

-- stdout --
	* Profile "addons-875427" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-875427"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-875427
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-875427: exit status 85 (54.48643ms)

-- stdout --
	* Profile "addons-875427" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-875427"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (144.75s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-875427 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-875427 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m24.749783078s)
--- PASS: TestAddons/Setup (144.75s)

TestAddons/serial/Volcano (40.8s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 15.183407ms
addons_test.go:868: volcano-scheduler stabilized in 15.23413ms
addons_test.go:876: volcano-admission stabilized in 15.292127ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-rgv7h" [9bd12e0d-3d8a-414a-9538-4b0fde78006c] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003801056s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-wrjnb" [f4581492-def3-45e7-8f4f-2758bfbadac5] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003837364s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-ssjc9" [aaf5d95e-f624-454a-866d-682263396b5e] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003933531s
addons_test.go:903: (dbg) Run:  kubectl --context addons-875427 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-875427 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-875427 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [50f0153a-01fe-4c04-9397-a54f148a4c14] Pending
helpers_test.go:352: "test-job-nginx-0" [50f0153a-01fe-4c04-9397-a54f148a4c14] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [50f0153a-01fe-4c04-9397-a54f148a4c14] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003400437s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-875427 addons disable volcano --alsologtostderr -v=1: (11.473136647s)
--- PASS: TestAddons/serial/Volcano (40.80s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-875427 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-875427 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.49s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-875427 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-875427 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [61171206-7498-4590-be06-019f646b57a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [61171206-7498-4590-be06-019f646b57a8] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003516532s
addons_test.go:694: (dbg) Run:  kubectl --context addons-875427 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-875427 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-875427 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

TestAddons/parallel/Registry (16.02s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.814704ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-5wwxm" [96a25d41-d4d7-438a-ab3f-6b7c5e16221b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003760777s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-98j57" [718f54d6-73f3-42fd-8aaa-24bf8ed2c059] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003673997s
addons_test.go:392: (dbg) Run:  kubectl --context addons-875427 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-875427 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-875427 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.307996669s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 ip
2025/09/16 23:51:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.02s)

TestAddons/parallel/RegistryCreds (0.59s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.424101ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-875427
addons_test.go:332: (dbg) Run:  kubectl --context addons-875427 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.59s)

TestAddons/parallel/Ingress (21.19s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-875427 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-875427 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-875427 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9915e6a7-da25-41cf-8bf9-14028e05410e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9915e6a7-da25-41cf-8bf9-14028e05410e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.025177017s
I0916 23:52:02.984047  665399 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-875427 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-875427 addons disable ingress-dns --alsologtostderr -v=1: (1.236760629s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-875427 addons disable ingress --alsologtostderr -v=1: (7.631163857s)
--- PASS: TestAddons/parallel/Ingress (21.19s)

TestAddons/parallel/InspektorGadget (6.21s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-52d27" [3bbde79b-0075-431b-a026-5cd7114bdfd7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00458504s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.21s)

TestAddons/parallel/MetricsServer (5.59s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.695048ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-pr6mz" [86d168fd-6f2e-4e3b-9c7c-c0a9b22f248d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003470196s
addons_test.go:463: (dbg) Run:  kubectl --context addons-875427 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.59s)

TestAddons/parallel/CSI (49.26s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0916 23:51:47.578543  665399 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0916 23:51:47.582102  665399 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0916 23:51:47.582135  665399 kapi.go:107] duration metric: took 3.607021ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.619275ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-875427 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc -o jsonpath={.status.phase} -n default
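The repeated helpers_test.go:402 lines above are the harness polling the claim's phase until it reports Bound. A rough sketch of that wait loop, with assumed intervals and helper names rather than the harness's actual ones:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
// until it returns the wanted phase or the timeout expires. Illustrative only.
func waitForPVCPhase(kubeContext, namespace, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed backoff, not the harness's real value
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", namespace, name, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-875427", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}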
addons_test.go:562: (dbg) Run:  kubectl --context addons-875427 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [44ea3719-6936-477b-8918-c52b50634b94] Pending
helpers_test.go:352: "task-pv-pod" [44ea3719-6936-477b-8918-c52b50634b94] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [44ea3719-6936-477b-8918-c52b50634b94] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003525968s
addons_test.go:572: (dbg) Run:  kubectl --context addons-875427 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-875427 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-875427 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-875427 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-875427 delete pod task-pv-pod: (1.131496978s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-875427 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-875427 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-875427 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [43463021-d2e5-4ac1-a4f9-d98b0771c1bb] Pending
helpers_test.go:352: "task-pv-pod-restore" [43463021-d2e5-4ac1-a4f9-d98b0771c1bb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [43463021-d2e5-4ac1-a4f9-d98b0771c1bb] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00379381s
addons_test.go:614: (dbg) Run:  kubectl --context addons-875427 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-875427 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-875427 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-875427 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.51641349s)
--- PASS: TestAddons/parallel/CSI (49.26s)

TestAddons/parallel/Headlamp (20.33s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-875427 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-xvbzd" [b0de0fe9-76fc-49f7-b108-6faef1627ce3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-xvbzd" [b0de0fe9-76fc-49f7-b108-6faef1627ce3] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.003268102s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-875427 addons disable headlamp --alsologtostderr -v=1: (5.612219388s)
--- PASS: TestAddons/parallel/Headlamp (20.33s)

TestAddons/parallel/CloudSpanner (5.47s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-m8jcp" [0a272b46-c520-4e45-8acf-683e3beeee45] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003015796s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

TestAddons/parallel/LocalPath (55.59s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-875427 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-875427 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-875427 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4aa092c0-8788-4ea7-b43e-df9b35095cf7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4aa092c0-8788-4ea7-b43e-df9b35095cf7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4aa092c0-8788-4ea7-b43e-df9b35095cf7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004529355s
addons_test.go:967: (dbg) Run:  kubectl --context addons-875427 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 ssh "cat /opt/local-path-provisioner/pvc-cffbc901-771e-4368-9fb5-014398bd8ffc_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-875427 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-875427 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-875427 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.695278076s)
--- PASS: TestAddons/parallel/LocalPath (55.59s)

TestAddons/parallel/NvidiaDevicePlugin (6.5s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rn276" [044c3a62-7045-4b93-95d0-9bc92799a421] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003178466s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

TestAddons/parallel/Yakd (11.61s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2gnnl" [97ab0da4-424e-43ca-84d0-afb339693595] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003482513s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-875427 addons disable yakd --alsologtostderr -v=1: (5.603520873s)
--- PASS: TestAddons/parallel/Yakd (11.61s)

TestAddons/parallel/AmdGpuDevicePlugin (6.5s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-xsg4c" [ff5fd92f-fb8e-4b5c-8e25-a710053a80da] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003138845s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-875427 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.50s)

TestAddons/StoppedEnableDisable (11.14s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-875427
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-875427: (10.893885887s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-875427
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-875427
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-875427
--- PASS: TestAddons/StoppedEnableDisable (11.14s)

TestCertOptions (29.05s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-456835 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-456835 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (26.205050897s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-456835 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-456835 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-456835 -- "sudo cat /etc/kubernetes/admin.conf"
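The openssl and ssh checks above assert that the generated apiserver certificate carries the extra SANs passed via --apiserver-ips/--apiserver-names and that admin.conf uses the non-default port. The same SAN assertion can be sketched with crypto/x509 (the local file path is an assumption; the test itself parses the openssl output over ssh):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumes the cert was copied out of the node, e.g. from
	// /var/lib/minikube/certs/apiserver.crt, to a local file.
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com, ...
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}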
helpers_test.go:175: Cleaning up "cert-options-456835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-456835
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-456835: (2.225585105s)
--- PASS: TestCertOptions (29.05s)

TestCertExpiration (242.51s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-843787 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-843787 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (24.744512037s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-843787 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-843787 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (35.587418034s)
helpers_test.go:175: Cleaning up "cert-expiration-843787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-843787
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-843787: (2.175818989s)
--- PASS: TestCertExpiration (242.51s)

TestDockerFlags (29.55s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-174047 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-174047 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.873928271s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-174047 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-174047 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-174047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-174047
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-174047: (2.111011899s)
--- PASS: TestDockerFlags (29.55s)

TestForceSystemdFlag (27.36s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-591079 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-591079 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.855084016s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-591079 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-591079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-591079
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-591079: (2.181048488s)
--- PASS: TestForceSystemdFlag (27.36s)

TestForceSystemdEnv (28.8s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-834544 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-834544 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.152444483s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-834544 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-834544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-834544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-834544: (2.276865975s)
--- PASS: TestForceSystemdEnv (28.80s)

TestKVMDriverInstallOrUpdate (2.31s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0917 00:44:05.900545  665399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0917 00:44:05.900705  665399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0917 00:44:05.929261  665399 install.go:62] docker-machine-driver-kvm2: exit status 1
W0917 00:44:05.929418  665399 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0917 00:44:05.929483  665399 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4283177784/001/docker-machine-driver-kvm2
I0917 00:44:06.280561  665399 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4283177784/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80] Decompressors:map[bz2:0xc0004a5f20 gz:0xc0004a5f28 tar:0xc0004a5ec0 tar.bz2:0xc0004a5ee0 tar.gz:0xc0004a5ef0 tar.xz:0xc0004a5f00 tar.zst:0xc0004a5f10 tbz2:0xc0004a5ee0 tgz:0xc0004a5ef0 txz:0xc0004a5f00 tzst:0xc0004a5f10 xz:0xc0004a5f30 zip:0xc0004a5f40 zst:0xc0004a5f38] Getters:map[file:0xc0016c0830 http:0xc000072a00 https:0xc000072a50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0917 00:44:06.280609  665399 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4283177784/001/docker-machine-driver-kvm2
I0917 00:44:07.718489  665399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0917 00:44:07.718577  665399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0917 00:44:07.746236  665399 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0917 00:44:07.746266  665399 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0917 00:44:07.746330  665399 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0917 00:44:07.746357  665399 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4283177784/002/docker-machine-driver-kvm2
I0917 00:44:07.805223  665399 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4283177784/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80] Decompressors:map[bz2:0xc0004a5f20 gz:0xc0004a5f28 tar:0xc0004a5ec0 tar.bz2:0xc0004a5ee0 tar.gz:0xc0004a5ef0 tar.xz:0xc0004a5f00 tar.zst:0xc0004a5f10 tbz2:0xc0004a5ee0 tgz:0xc0004a5ef0 txz:0xc0004a5f00 tzst:0xc0004a5f10 xz:0xc0004a5f30 zip:0xc0004a5f40 zst:0xc0004a5f38] Getters:map[file:0xc001d97530 http:0xc0006f7e50 https:0xc0006f7ea0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0917 00:44:07.805272  665399 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4283177784/002/docker-machine-driver-kvm2
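The driver.go/download.go exchanges above show the install path: the arch-suffixed asset (docker-machine-driver-kvm2-amd64) is tried first, its checksum file 404s on this old release, and the code falls back to the un-suffixed "common" asset. A compact sketch of that fallback with a stubbed downloader (hypothetical names, not the actual install code):

package main

import (
	"fmt"
	"strings"
)

// fetch stands in for the real checksum-verified downloader.
func fetch(url string) error {
	if strings.HasSuffix(url, "-amd64") {
		return fmt.Errorf("bad response code: 404") // old releases ship no -amd64 checksum file
	}
	fmt.Println("downloading", url)
	return nil
}

// downloadDriver tries the architecture-specific asset first and falls back
// to the common (un-suffixed) asset when that download fails.
func downloadDriver(version, arch string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version + "/docker-machine-driver-kvm2"
	archErr := fetch(base + "-" + arch)
	if archErr == nil {
		return nil
	}
	fmt.Printf("failed to download arch specific driver: %v. trying to get the common version\n", archErr)
	return fetch(base)
}

func main() {
	if err := downloadDriver("v1.3.0", "amd64"); err != nil {
		fmt.Println("common version failed too:", err)
	}
}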
--- PASS: TestKVMDriverInstallOrUpdate (2.31s)

TestErrorSpam/setup (23.04s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-409718 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-409718 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-409718 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-409718 --driver=docker  --container-runtime=docker: (23.038187867s)
--- PASS: TestErrorSpam/setup (23.04s)

TestErrorSpam/start (0.6s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 start --dry-run
--- PASS: TestErrorSpam/start (0.60s)

TestErrorSpam/status (0.9s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 status
--- PASS: TestErrorSpam/status (0.90s)

TestErrorSpam/pause (1.16s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 pause
--- PASS: TestErrorSpam/pause (1.16s)

TestErrorSpam/unpause (1.23s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 unpause
--- PASS: TestErrorSpam/unpause (1.23s)

TestErrorSpam/stop (10.89s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 stop: (10.709473777s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-409718 --log_dir /tmp/nospam-409718 stop
--- PASS: TestErrorSpam/stop (10.89s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21550-661878/.minikube/files/etc/test/nested/copy/665399/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.69s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-650494 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-650494 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (40.693916927s)
--- PASS: TestFunctional/serial/StartWithProxy (40.69s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (46.29s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0916 23:54:25.578891  665399 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-650494 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-650494 --alsologtostderr -v=8: (46.290170917s)
functional_test.go:678: soft start took 46.291074903s for "functional-650494" cluster.
I0916 23:55:11.869500  665399 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (46.29s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-650494 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-650494 /tmp/TestFunctionalserialCacheCmdcacheadd_local1472816441/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 cache add minikube-local-cache-test:functional-650494
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-650494 cache add minikube-local-cache-test:functional-650494: (1.097287383s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 cache delete minikube-local-cache-test:functional-650494
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-650494
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-650494 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.945355ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)
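The cache_reload log above shows the sequence being exercised: remove the cached image inside the node, confirm `crictl inspecti` now fails, then restore it with `minikube cache reload`. A sketch of the same round trip driven from Go; the profile name is hypothetical:

// cachereload_sketch.go - cache add / in-node delete / cache reload round trip.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports whether it exited zero.
func run(name string, args ...string) bool {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err == nil
}

func main() {
	p := "functional-example" // hypothetical profile
	run("minikube", "-p", p, "cache", "add", "registry.k8s.io/pause:latest")
	// Delete the image inside the node; inspecti should now fail ...
	run("minikube", "-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	if run("minikube", "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") {
		fmt.Println("image unexpectedly still present")
	}
	// ... until cache reload pushes it back from the host-side cache.
	run("minikube", "-p", p, "cache", "reload")
	run("minikube", "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")
}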

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 kubectl -- --context functional-650494 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-650494 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (48.57s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-650494 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0916 23:55:36.259259  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:36.265641  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:36.277057  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:36.298459  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:36.339885  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:36.421330  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:36.582891  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:36.904620  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:37.546732  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:38.828366  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:41.391271  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:46.513471  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:55:56.754955  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-650494 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.571824292s)
functional_test.go:776: restart took 48.571978255s for "functional-650494" cluster.
I0916 23:56:06.196593  665399 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (48.57s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-650494 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-650494 logs: (1.009057259s)
--- PASS: TestFunctional/serial/LogsCmd (1.01s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 logs --file /tmp/TestFunctionalserialLogsFileCmd1077384628/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-650494 logs --file /tmp/TestFunctionalserialLogsFileCmd1077384628/001/logs.txt: (1.037975388s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.04s)

                                                
                                    
TestFunctional/serial/InvalidService (4.44s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-650494 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-650494
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-650494: exit status 115 (336.18136ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32131 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-650494 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.44s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-650494 config get cpus: exit status 14 (61.808339ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-650494 config get cpus: exit status 14 (61.337954ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
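The ConfigCmd log above relies on `minikube config get` exiting with status 14 when the key is unset ("specified key could not be found in config"). A sketch that distinguishes that case from other failures; the profile name is hypothetical:

// configcmd_sketch.go - treat exit status 14 from `minikube config get` as "key not set".
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func getConfig(profile, key string) (value string, set bool, err error) {
	out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		return "", false, nil // key not present in config
	}
	if err != nil {
		return "", false, err
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	val, ok, err := getConfig("functional-example", "cpus") // hypothetical profile
	fmt.Println(val, ok, err)
}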

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-650494 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-650494 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 717947: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.38s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-650494 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-650494 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (164.165226ms)

                                                
                                                
-- stdout --
	* [functional-650494] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 23:56:38.416430  717286 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:38.416563  717286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:38.416574  717286 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:38.416581  717286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:38.416920  717286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0916 23:56:38.417467  717286 out.go:368] Setting JSON to false
	I0916 23:56:38.419049  717286 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9530,"bootTime":1758057468,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:38.419223  717286 start.go:140] virtualization: kvm guest
	I0916 23:56:38.421716  717286 out.go:179] * [functional-650494] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:38.423417  717286 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:38.423448  717286 notify.go:220] Checking for updates...
	I0916 23:56:38.426152  717286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:38.428064  717286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:56:38.429589  717286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:56:38.431421  717286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:38.433433  717286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:38.435560  717286 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:56:38.436311  717286 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:38.462620  717286 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:38.462737  717286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:38.520779  717286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-09-16 23:56:38.510375055 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:38.520956  717286 docker.go:318] overlay module found
	I0916 23:56:38.522783  717286 out.go:179] * Using the docker driver based on existing profile
	I0916 23:56:38.524043  717286 start.go:304] selected driver: docker
	I0916 23:56:38.524059  717286 start.go:918] validating driver "docker" against &{Name:functional-650494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-650494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:38.524141  717286 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:38.525769  717286 out.go:203] 
	W0916 23:56:38.527179  717286 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 23:56:38.528479  717286 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-650494 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.38s)
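The dry-run log above ends with exit status 23 and RSRC_INSUFFICIENT_REQ_MEMORY because 250MB is below the usable minimum, without the existing cluster being touched. A sketch asserting on that exit code; the profile name is hypothetical:

// dryrun_sketch.go - expect exit code 23 when --dry-run is given too little memory.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-example", // hypothetical profile
		"--dry-run", "--memory", "250MB", "--driver=docker")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("got expected RSRC_INSUFFICIENT_REQ_MEMORY failure")
		return
	}
	fmt.Println("unexpected result:", err)
}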

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-650494 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-650494 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (158.950426ms)

                                                
                                                
-- stdout --
	* [functional-650494] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 23:56:38.803235  717597 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:56:38.803356  717597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:38.803367  717597 out.go:374] Setting ErrFile to fd 2...
	I0916 23:56:38.803374  717597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:56:38.803665  717597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0916 23:56:38.804193  717597 out.go:368] Setting JSON to false
	I0916 23:56:38.805281  717597 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9531,"bootTime":1758057468,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:56:38.805395  717597 start.go:140] virtualization: kvm guest
	I0916 23:56:38.807084  717597 out.go:179] * [functional-650494] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0916 23:56:38.808458  717597 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:56:38.808454  717597 notify.go:220] Checking for updates...
	I0916 23:56:38.811464  717597 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:56:38.812976  717597 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	I0916 23:56:38.814153  717597 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	I0916 23:56:38.815457  717597 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:56:38.816753  717597 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:56:38.818410  717597 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0916 23:56:38.818950  717597 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:56:38.843188  717597 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:56:38.843277  717597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:56:38.898825  717597 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-09-16 23:56:38.887882462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:56:38.899025  717597 docker.go:318] overlay module found
	I0916 23:56:38.901012  717597 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0916 23:56:38.902437  717597 start.go:304] selected driver: docker
	I0916 23:56:38.902457  717597 start.go:918] validating driver "docker" against &{Name:functional-650494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-650494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:56:38.902568  717597 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:56:38.904389  717597 out.go:203] 
	W0916 23:56:38.905334  717597 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 23:56:38.906437  717597 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)
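The StatusCmd log above formats status with a Go template over the Host/Kubelet/APIServer/Kubeconfig fields and also requests JSON output. A sketch decoding the JSON form into a struct with those same fields; the profile name is hypothetical and the struct covers only the fields the template references:

// statusjson_sketch.go - decode `minikube status -o json` for one profile.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Status mirrors the fields referenced by the template in the test log.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-example", // hypothetical profile
		"status", "-o", "json").Output()
	if err != nil {
		// status exits non-zero when components are stopped;
		// the JSON on stdout may still be usable.
		fmt.Println("non-zero exit:", err)
	}
	var st Status
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		fmt.Println("decode failed:", jsonErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}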

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-650494 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-650494 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bk8rx" [25050f09-c0ef-46ef-a697-baa5d83eaa5c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-bk8rx" [25050f09-c0ef-46ef-a697-baa5d83eaa5c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003774122s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30452
functional_test.go:1680: http://192.168.49.2:30452: success! body:
Request served by hello-node-connect-7d85dfc575-bk8rx

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30452
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.67s)
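The ServiceCmdConnect log above creates a deployment, exposes it as a NodePort service, asks `minikube service ... --url` for the endpoint, and issues an HTTP GET against it. A sketch of that last step; the profile name is hypothetical:

// serviceconnect_sketch.go - resolve a NodePort service URL and GET it.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-example", // hypothetical profile
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("service lookup failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}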

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (33.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [513139e5-55d9-4eaa-9c6d-0695d8dd0e77] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004194067s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-650494 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-650494 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-650494 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-650494 apply -f testdata/storage-provisioner/pod.yaml
I0916 23:56:27.100745  665399 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [68c78223-e40d-400b-8165-ad2cedab21a5] Pending
helpers_test.go:352: "sp-pod" [68c78223-e40d-400b-8165-ad2cedab21a5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [68c78223-e40d-400b-8165-ad2cedab21a5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.00411382s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-650494 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-650494 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-650494 delete -f testdata/storage-provisioner/pod.yaml: (1.12462768s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-650494 apply -f testdata/storage-provisioner/pod.yaml
I0916 23:56:42.477709  665399 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0032f7ce-bbe7-4862-ac07-7f501d814621] Pending
helpers_test.go:352: "sp-pod" [0032f7ce-bbe7-4862-ac07-7f501d814621] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0032f7ce-bbe7-4862-ac07-7f501d814621] Running
2025/09/16 23:56:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004442783s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-650494 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.01s)
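The PVC log above writes a file through sp-pod, deletes the pod, recreates it against the same claim, and lists the mount to confirm the data survived. A sketch of that round trip with kubectl; the context name and manifest paths are illustrative:

// pvc_persistence_sketch.go - verify data written through a PVC survives pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-example"}, args...) // hypothetical context
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	_ = kubectl("apply", "-f", "pvc.yaml") // illustrative manifests
	_ = kubectl("apply", "-f", "pod.yaml")
	_ = kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=120s")
	_ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_ = kubectl("delete", "-f", "pod.yaml")
	_ = kubectl("apply", "-f", "pod.yaml")
	_ = kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=120s")
	// The file should still be listed because the claim (and its volume) outlived the pod.
	_ = kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}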

                                                
                                    
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh -n functional-650494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 cp functional-650494:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1661516819/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh -n functional-650494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh -n functional-650494 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.79s)
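The CpCmd log above copies a file into the node with `minikube cp` and verifies it by reading it back over `ssh`. A short sketch of the same check; the profile name is hypothetical:

// cpcmd_sketch.go - copy a file into the node and read it back over ssh.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	p := "functional-example" // hypothetical profile
	if out, err := exec.Command("minikube", "-p", p, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		fmt.Println("cp failed:", err, string(out))
		return
	}
	out, err := exec.Command("minikube", "-p", p, "ssh", "-n", p,
		"sudo cat /home/docker/cp-test.txt").CombinedOutput()
	fmt.Println(string(out), err)
}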

                                                
                                    
TestFunctional/parallel/MySQL (23.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-650494 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-sd8ns" [fa925bba-9235-4f03-9675-7dc29c074c65] Pending
helpers_test.go:352: "mysql-5bb876957f-sd8ns" [fa925bba-9235-4f03-9675-7dc29c074c65] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-sd8ns" [fa925bba-9235-4f03-9675-7dc29c074c65] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.004332575s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-650494 exec mysql-5bb876957f-sd8ns -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-650494 exec mysql-5bb876957f-sd8ns -- mysql -ppassword -e "show databases;": exit status 1 (205.918474ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0916 23:56:32.876595  665399 retry.go:31] will retry after 993.016553ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-650494 exec mysql-5bb876957f-sd8ns -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-650494 exec mysql-5bb876957f-sd8ns -- mysql -ppassword -e "show databases;": exit status 1 (154.439441ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0916 23:56:34.025380  665399 retry.go:31] will retry after 972.701247ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-650494 exec mysql-5bb876957f-sd8ns -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-650494 exec mysql-5bb876957f-sd8ns -- mysql -ppassword -e "show databases;": exit status 1 (133.65095ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0916 23:56:35.134290  665399 retry.go:31] will retry after 1.907880576s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-650494 exec mysql-5bb876957f-sd8ns -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.73s)
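The MySQL log above shows the harness retrying `mysql -e "show databases;"` with growing delays while the server finishes initializing (access denied, then socket errors, then success). A minimal retry-with-backoff sketch of the same idea; the context name and backoff values are illustrative:

// mysql_retry_sketch.go - retry a readiness probe with a growing delay.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-example", // hypothetical context
			"exec", "deploy/mysql", "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // back off before the next probe
	}
	fmt.Println("mysql never became ready")
}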

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/665399/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo cat /etc/test/nested/copy/665399/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/665399.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo cat /etc/ssl/certs/665399.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/665399.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo cat /usr/share/ca-certificates/665399.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/6653992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo cat /etc/ssl/certs/6653992.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/6653992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo cat /usr/share/ca-certificates/6653992.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.77s)
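The 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash style names that sit alongside the plain 665399.pem / 6653992.pem copies. How such a hash name is derived (a sketch, using a path from this run):

  # Print the subject hash behind a <hash>.0 filename for a synced cert (sketch).
  openssl x509 -noout -subject_hash -in /etc/ssl/certs/665399.pem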

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-650494 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-650494 ssh "sudo systemctl is-active crio": exit status 1 (298.225545ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)
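The non-zero exit above is the expected result: systemctl is-active prints "inactive" and exits with status 3 when a unit is not running, which is exactly what this test wants for crio on a Docker-runtime cluster. The same check by hand (sketch):

  # "inactive" plus exit status 3 means crio is not the active runtime here (expected).
  out/minikube-linux-amd64 -p functional-650494 ssh "sudo systemctl is-active crio" \
    || echo "crio inactive (expected with the docker runtime)"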

                                                
                                    
TestFunctional/parallel/License (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-650494 docker-env) && out/minikube-linux-amd64 status -p functional-650494"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-650494 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-650494 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-650494
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-650494
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-650494 image ls --format short --alsologtostderr:
I0916 23:56:44.864267  719466 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:44.864570  719466 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:44.864582  719466 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:44.864586  719466 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:44.864800  719466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
I0916 23:56:44.865428  719466 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:44.865517  719466 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:44.865932  719466 cli_runner.go:164] Run: docker container inspect functional-650494 --format={{.State.Status}}
I0916 23:56:44.889252  719466 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:44.889341  719466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-650494
I0916 23:56:44.920700  719466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/functional-650494/id_rsa Username:docker}
I0916 23:56:45.027333  719466 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-650494 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ localhost/my-image                          │ functional-650494 │ 62313aa3ca027 │ 1.24MB │
│ docker.io/library/nginx                     │ latest            │ 41f689c209100 │ 192MB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ a0af72f2ec6d6 │ 74.9MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ df0860106674d │ 71.9MB │
│ docker.io/library/nginx                     │ alpine            │ 4a86014ec6994 │ 52.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-650494 │ e5652763c60e3 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ 90550c43ad2bc │ 88MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ 46169d968e920 │ 52.8MB │
│ docker.io/kicbase/echo-server               │ functional-650494 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-650494 image ls --format table --alsologtostderr:
I0916 23:56:48.936938  721263 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:48.937249  721263 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:48.937263  721263 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:48.937269  721263 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:48.937512  721263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
I0916 23:56:48.938135  721263 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:48.938281  721263 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:48.938715  721263 cli_runner.go:164] Run: docker container inspect functional-650494 --format={{.State.Status}}
I0916 23:56:48.956990  721263 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:48.957050  721263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-650494
I0916 23:56:48.974479  721263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/functional-650494/id_rsa Username:docker}
I0916 23:56:49.068983  721263 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-650494 image ls --format json --alsologtostderr:
[{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"71900000"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"74900000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-650494","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"
id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e5652763c60e314457103e95a11fd0ef8822ff8b42407fd5439ef62eb0a5b770","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-650494"],"size":"30"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"88000000"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"52800000"},{"id":"41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52500000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5ba
f0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"62313aa3ca027516391eee25a8e51b04d6f6e989ffe34acec7e5eeaac748a1a0","repoDigests":[],"repoTags":["localhost/m
y-image:functional-650494"],"size":"1240000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-650494 image ls --format json --alsologtostderr:
I0916 23:56:48.917637  721252 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:48.917967  721252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:48.917988  721252 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:48.917996  721252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:48.918232  721252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
I0916 23:56:48.918893  721252 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:48.919060  721252 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:48.919612  721252 cli_runner.go:164] Run: docker container inspect functional-650494 --format={{.State.Status}}
I0916 23:56:48.938940  721252 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:48.938990  721252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-650494
I0916 23:56:48.956958  721252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/functional-650494/id_rsa Username:docker}
I0916 23:56:49.050766  721252 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
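The JSON listing above is a single array of {id, repoDigests, repoTags, size} objects, so it pipes cleanly into jq. For example, to pull out only the tags (sketch; assumes jq is available on the host):

  # List just the repo tags from the JSON image listing (sketch).
  out/minikube-linux-amd64 -p functional-650494 image ls --format json | jq -r '.[].repoTags[]'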

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-650494 image ls --format yaml --alsologtostderr:
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "88000000"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "74900000"
- id: 41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-650494
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 62313aa3ca027516391eee25a8e51b04d6f6e989ffe34acec7e5eeaac748a1a0
repoDigests: []
repoTags:
- localhost/my-image:functional-650494
size: "1240000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e5652763c60e314457103e95a11fd0ef8822ff8b42407fd5439ef62eb0a5b770
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-650494
size: "30"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52500000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "52800000"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "71900000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-650494 image ls --format yaml --alsologtostderr:
I0916 23:56:48.698331  721087 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:48.698662  721087 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:48.698677  721087 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:48.698682  721087 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:48.699051  721087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
I0916 23:56:48.699794  721087 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:48.699888  721087 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:48.700291  721087 cli_runner.go:164] Run: docker container inspect functional-650494 --format={{.State.Status}}
I0916 23:56:48.719858  721087 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:48.719949  721087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-650494
I0916 23:56:48.739797  721087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/functional-650494/id_rsa Username:docker}
I0916 23:56:48.838587  721087 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-650494 ssh pgrep buildkitd: exit status 1 (295.069262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image build -t localhost/my-image:functional-650494 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-650494 image build -t localhost/my-image:functional-650494 testdata/build --alsologtostderr: (3.300534077s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-650494 image build -t localhost/my-image:functional-650494 testdata/build --alsologtostderr:
I0916 23:56:45.411942  719820 out.go:360] Setting OutFile to fd 1 ...
I0916 23:56:45.412234  719820 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:45.412255  719820 out.go:374] Setting ErrFile to fd 2...
I0916 23:56:45.412262  719820 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0916 23:56:45.412441  719820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
I0916 23:56:45.413306  719820 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:45.414086  719820 config.go:182] Loaded profile config "functional-650494": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0916 23:56:45.414662  719820 cli_runner.go:164] Run: docker container inspect functional-650494 --format={{.State.Status}}
I0916 23:56:45.435339  719820 ssh_runner.go:195] Run: systemctl --version
I0916 23:56:45.435406  719820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-650494
I0916 23:56:45.456262  719820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/functional-650494/id_rsa Username:docker}
I0916 23:56:45.556548  719820 build_images.go:161] Building image from path: /tmp/build.386953058.tar
I0916 23:56:45.556662  719820 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 23:56:45.569308  719820 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.386953058.tar
I0916 23:56:45.573765  719820 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.386953058.tar: stat -c "%s %y" /var/lib/minikube/build/build.386953058.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.386953058.tar': No such file or directory
I0916 23:56:45.573791  719820 ssh_runner.go:362] scp /tmp/build.386953058.tar --> /var/lib/minikube/build/build.386953058.tar (3072 bytes)
I0916 23:56:45.606621  719820 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.386953058
I0916 23:56:45.618550  719820 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.386953058 -xf /var/lib/minikube/build/build.386953058.tar
I0916 23:56:45.630467  719820 docker.go:361] Building image: /var/lib/minikube/build/build.386953058
I0916 23:56:45.630553  719820 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-650494 /var/lib/minikube/build/build.386953058
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:62313aa3ca027516391eee25a8e51b04d6f6e989ffe34acec7e5eeaac748a1a0 done
#8 naming to localhost/my-image:functional-650494 done
#8 DONE 0.0s
I0916 23:56:48.625858  719820 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-650494 /var/lib/minikube/build/build.386953058: (2.995271308s)
I0916 23:56:48.625987  719820 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.386953058
I0916 23:56:48.638982  719820 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.386953058.tar
I0916 23:56:48.650582  719820 build_images.go:217] Built localhost/my-image:functional-650494 from /tmp/build.386953058.tar
I0916 23:56:48.650618  719820 build_images.go:133] succeeded building to: functional-650494
I0916 23:56:48.650625  719820 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)
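The BuildKit steps above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply a three-line Dockerfile. A sketch of what testdata/build presumably contains, reconstructed from steps #5 through #7 as an assumption rather than the file itself:

  # Reconstructed Dockerfile for the build exercised above (assumption).
  cat > Dockerfile <<'EOF'
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /
  EOF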

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.833498802s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-650494
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (18.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-650494 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-650494 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-xzz5w" [0ecbf3e7-3b70-4186-9190-8b73e500022e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-xzz5w" [0ecbf3e7-3b70-4186-9190-8b73e500022e] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.003767746s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.15s)
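The create/expose pair above leaves a NodePort service behind; the later ServiceCmd subtests resolve it to a reachable URL. The same lookup by hand (sketch; this is the command the URL subtest runs further down):

  # Resolve the NodePort URL for the hello-node service (sketch).
  out/minikube-linux-amd64 -p functional-650494 service hello-node --url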

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-650494 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-650494 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-650494 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 713391: os: process already finished
helpers_test.go:525: unable to kill pid 713219: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-650494 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-650494 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-650494 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [19bfeee1-b382-4231-93ed-b2fa892f357e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [19bfeee1-b382-4231-93ed-b2fa892f357e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.003590514s
I0916 23:56:36.118172  665399 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image load --daemon kicbase/echo-server:functional-650494 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image load --daemon kicbase/echo-server:functional-650494 --alsologtostderr
E0916 23:56:17.236518  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-650494
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image load --daemon kicbase/echo-server:functional-650494 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image save kicbase/echo-server:functional-650494 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image rm kicbase/echo-server:functional-650494 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-650494
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 image save --daemon kicbase/echo-server:functional-650494 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-650494
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 service list -o json
functional_test.go:1504: Took "534.035703ms" to run "out/minikube-linux-amd64 -p functional-650494 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32625
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32625
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-650494 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.99.228 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
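With the tunnel still running, the LoadBalancer ingress IP reported by WaitService/IngressIP is reachable straight from the host, which is what AccessDirect asserts. A quick manual probe against the IP from this run (sketch):

  # Hit the tunnelled LoadBalancer IP from the host and show the status line (sketch).
  curl -sI http://10.101.99.228 | head -n 1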

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-650494 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdany-port3692117687/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758066997216866808" to /tmp/TestFunctionalparallelMountCmdany-port3692117687/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758066997216866808" to /tmp/TestFunctionalparallelMountCmdany-port3692117687/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758066997216866808" to /tmp/TestFunctionalparallelMountCmdany-port3692117687/001/test-1758066997216866808
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (289.203728ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0916 23:56:37.506419  665399 retry.go:31] will retry after 288.837175ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 23:56 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 23:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 23:56 test-1758066997216866808
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh cat /mount-9p/test-1758066997216866808
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-650494 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [3cbaeadb-e130-4e00-bb86-86e7da2eae95] Pending
helpers_test.go:352: "busybox-mount" [3cbaeadb-e130-4e00-bb86-86e7da2eae95] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [3cbaeadb-e130-4e00-bb86-86e7da2eae95] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [3cbaeadb-e130-4e00-bb86-86e7da2eae95] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003604249s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-650494 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdany-port3692117687/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.60s)
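The any-port flow above boils down to keeping "minikube mount" running as a daemon and probing the guest until the 9p filesystem appears. By hand that is roughly (sketch; /tmp/example is a placeholder directory, not a path from this run):

  # Keep the mount daemon running, then check for the 9p mount from inside the guest (sketch).
  out/minikube-linux-amd64 mount -p functional-650494 /tmp/example:/mount-9p &
  out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T /mount-9p | grep 9p"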

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "333.898474ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "51.831884ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "330.46388ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "51.248679ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdspecific-port547757372/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.452029ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0916 23:56:45.122169  665399 retry.go:31] will retry after 657.195056ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdspecific-port547757372/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-650494 ssh "sudo umount -f /mount-9p": exit status 1 (300.776415ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-650494 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdspecific-port547757372/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)
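The repeated `findmnt -T /mount-9p | grep 9p` probes above come from the harness retrying until the 9p mount is visible in the guest. A minimal sketch of that polling pattern follows; the attempt count and interval are illustrative assumptions, not the harness's own retry helper.

// retry_mount_check.go: minimal sketch of polling a 9p mount inside the
// minikube guest, in the spirit of the retries logged above. Interval and
// attempt count are assumptions; the real tests use their own retry helper.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func mountReady(profile string) bool {
	// Same probe the test runs: a non-zero exit means the mount is not visible yet.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"findmnt -T /mount-9p | grep 9p")
	return cmd.Run() == nil
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		if mountReady("functional-650494") {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}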

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1969998897/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1969998897/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1969998897/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T" /mount1: exit status 1 (359.60138ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0916 23:56:47.255902  665399 retry.go:31] will retry after 426.029302ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-650494 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-650494 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1969998897/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1969998897/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-650494 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1969998897/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-650494
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-650494
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-650494
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (277.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0916 23:56:58.197835  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0916 23:58:20.122489  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:00:36.250582  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:03.964520  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:13.667243  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:13.673631  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:13.685024  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:13.706437  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:13.747930  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:13.829392  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:13.990730  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:14.312215  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:14.954120  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:16.236349  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:18.798123  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:23.919987  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:01:34.162071  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (4m36.643490035s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (277.37s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-198834 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (21.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-198834 stop --alsologtostderr -v 5: (21.59688425s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198834 status --alsologtostderr -v 5: exit status 7 (104.939282ms)

                                                
                                                
-- stdout --
	ha-198834
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-198834-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-198834-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:14:51.880236  790970 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:14:51.880511  790970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:14:51.880522  790970 out.go:374] Setting ErrFile to fd 2...
	I0917 00:14:51.880526  790970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:14:51.880700  790970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:14:51.880891  790970 out.go:368] Setting JSON to false
	I0917 00:14:51.880925  790970 mustload.go:65] Loading cluster: ha-198834
	I0917 00:14:51.881067  790970 notify.go:220] Checking for updates...
	I0917 00:14:51.881505  790970 config.go:182] Loaded profile config "ha-198834": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:14:51.881535  790970 status.go:174] checking status of ha-198834 ...
	I0917 00:14:51.882110  790970 cli_runner.go:164] Run: docker container inspect ha-198834 --format={{.State.Status}}
	I0917 00:14:51.902253  790970 status.go:371] ha-198834 host status = "Stopped" (err=<nil>)
	I0917 00:14:51.902274  790970 status.go:384] host is not running, skipping remaining checks
	I0917 00:14:51.902280  790970 status.go:176] ha-198834 status: &{Name:ha-198834 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:14:51.902335  790970 status.go:174] checking status of ha-198834-m02 ...
	I0917 00:14:51.902590  790970 cli_runner.go:164] Run: docker container inspect ha-198834-m02 --format={{.State.Status}}
	I0917 00:14:51.920977  790970 status.go:371] ha-198834-m02 host status = "Stopped" (err=<nil>)
	I0917 00:14:51.920996  790970 status.go:384] host is not running, skipping remaining checks
	I0917 00:14:51.921014  790970 status.go:176] ha-198834-m02 status: &{Name:ha-198834-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:14:51.921032  790970 status.go:174] checking status of ha-198834-m04 ...
	I0917 00:14:51.921293  790970 cli_runner.go:164] Run: docker container inspect ha-198834-m04 --format={{.State.Status}}
	I0917 00:14:51.937966  790970 status.go:371] ha-198834-m04 host status = "Stopped" (err=<nil>)
	I0917 00:14:51.938001  790970 status.go:384] host is not running, skipping remaining checks
	I0917 00:14:51.938011  790970 status.go:176] ha-198834-m04 status: &{Name:ha-198834-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (21.70s)

                                                
                                    
TestImageBuild/serial/Setup (22.35s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-085757 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-085757 --driver=docker  --container-runtime=docker: (22.35399195s)
--- PASS: TestImageBuild/serial/Setup (22.35s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.08s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-085757
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-085757: (1.083762257s)
--- PASS: TestImageBuild/serial/NormalBuild (1.08s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.67s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-085757
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.67s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-085757
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.5s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-085757
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.50s)

                                                
                                    
TestJSONOutput/start/Command (36.38s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-616246 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-616246 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (36.380104111s)
--- PASS: TestJSONOutput/start/Command (36.38s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.47s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-616246 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.44s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-616246 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-616246 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-616246 --output=json --user=testUser: (10.802498281s)
--- PASS: TestJSONOutput/stop/Command (10.80s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-157364 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-157364 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (65.603028ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5a36d909-c26d-488d-a1b1-0320bc555b5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-157364] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"66b33ed6-99e7-42bd-ab8c-baef86d17183","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21550"}}
	{"specversion":"1.0","id":"54032346-7e9a-4638-8bc7-02e1f907b0bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9d7f0b3a-9505-4b0e-ad6f-96ca1019611f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig"}}
	{"specversion":"1.0","id":"82af0fdd-b97e-450f-a057-a0d06aca542b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube"}}
	{"specversion":"1.0","id":"65689cdc-10dd-428a-854a-187833376582","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3018ff8b-734f-448f-abdd-b4a40f5a7628","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"76a8def1-aedb-4975-9a40-3547fa326d03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-157364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-157364
--- PASS: TestErrorJSONOutput (0.20s)
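Each stdout line above is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data). A minimal Go sketch for scanning that stream and surfacing the error event is shown below; the struct mirrors only the fields visible in this report, and the field set is otherwise an assumption.

// decode_events.go: minimal sketch for reading the line-delimited JSON events
// shown in the stdout block above (one event object per line). Only the
// "type" and "data" fields visible in this report are decoded.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event: %s (exit code %s)\n", ev.Data["message"], ev.Data["exitcode"])
		}
	}
}

A possible invocation, piping minikube's JSON output into the sketch: out/minikube-linux-amd64 start -p some-profile --output=json | go run decode_events.go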

                                                
                                    
TestKicCustomNetwork/create_custom_network (23.67s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-269864 --network=
E0917 00:28:39.330304  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-269864 --network=: (21.539720827s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-269864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-269864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-269864: (2.107975738s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.67s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.27s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-997807 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-997807 --network=bridge: (21.307668197s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-997807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-997807
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-997807: (1.945991324s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.27s)

                                                
                                    
TestKicExistingNetwork (24.49s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0917 00:29:12.277807  665399 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0917 00:29:12.293783  665399 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0917 00:29:12.293855  665399 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0917 00:29:12.293889  665399 cli_runner.go:164] Run: docker network inspect existing-network
W0917 00:29:12.309532  665399 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0917 00:29:12.309573  665399 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0917 00:29:12.309597  665399 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0917 00:29:12.309779  665399 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0917 00:29:12.326827  665399 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ab651df73000 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:63:f8:73:0d:ee} reservation:<nil>}
I0917 00:29:12.327320  665399 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d27130}
I0917 00:29:12.327348  665399 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0917 00:29:12.327392  665399 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0917 00:29:12.385163  665399 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-189558 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-189558 --network=existing-network: (22.428216835s)
helpers_test.go:175: Cleaning up "existing-network-189558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-189558
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-189558: (1.917956573s)
I0917 00:29:36.748090  665399 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.49s)
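The log above shows the exact `docker network create` invocation used to pre-create `existing-network` before starting the profile. A small Go sketch reproducing that command via os/exec follows; the 192.168.58.0/24 subnet is hard-coded here for illustration, whereas the test first scans for a free private range.

// create_network.go: minimal sketch that pre-creates a Docker bridge network
// with the same flags logged above for TestKicExistingNetwork. The subnet is
// hard-coded for illustration; the test picks a free private range first.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}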

                                                
                                    
TestKicCustomSubnet (24.04s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-598175 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-598175 --subnet=192.168.60.0/24: (21.900850015s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-598175 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-598175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-598175
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-598175: (2.121518924s)
--- PASS: TestKicCustomSubnet (24.04s)

                                                
                                    
TestKicStaticIP (23.87s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-160072 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-160072 --static-ip=192.168.200.200: (21.628420194s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-160072 ip
helpers_test.go:175: Cleaning up "static-ip-160072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-160072
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-160072: (2.105123616s)
--- PASS: TestKicStaticIP (23.87s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (51.69s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-221761 --driver=docker  --container-runtime=docker
E0917 00:30:36.251106  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-221761 --driver=docker  --container-runtime=docker: (22.186806468s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-237014 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-237014 --driver=docker  --container-runtime=docker: (24.0604899s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-221761
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-237014
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-237014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-237014
E0917 00:31:13.667208  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-237014: (2.12756913s)
helpers_test.go:175: Cleaning up "first-221761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-221761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-221761: (2.132538285s)
--- PASS: TestMinikubeProfile (51.69s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.12s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-947705 --memory=3072 --mount-string /tmp/TestMountStartserial1561136929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-947705 --memory=3072 --mount-string /tmp/TestMountStartserial1561136929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.117006557s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.12s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-947705 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.73s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-964765 --memory=3072 --mount-string /tmp/TestMountStartserial1561136929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-964765 --memory=3072 --mount-string /tmp/TestMountStartserial1561136929/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.730919457s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.73s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-964765 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.52s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-947705 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-947705 --alsologtostderr -v=5: (1.517080272s)
--- PASS: TestMountStart/serial/DeleteFirst (1.52s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-964765 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-964765
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-964765: (1.181414514s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (9.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-964765
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-964765: (8.519849356s)
--- PASS: TestMountStart/serial/RestartStopped (9.52s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-964765 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (58.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849943 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-849943 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (57.932056867s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (58.39s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (41.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-849943 -- rollout status deployment/busybox: (3.330920608s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0917 00:32:51.149031  665399 retry.go:31] will retry after 630.757565ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0917 00:32:51.895253  665399 retry.go:31] will retry after 2.218929285s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0917 00:32:54.232694  665399 retry.go:31] will retry after 1.436776971s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0917 00:32:55.784569  665399 retry.go:31] will retry after 3.093974611s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0917 00:32:58.995314  665399 retry.go:31] will retry after 3.293629264s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0917 00:33:02.408125  665399 retry.go:31] will retry after 8.918130961s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0917 00:33:11.446813  665399 retry.go:31] will retry after 15.89655264s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-l2ksb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-ph292 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-l2ksb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-ph292 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-l2ksb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-ph292 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (41.13s)
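The retries above poll `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` until both busybox replicas report an IP. A minimal sketch of that polling loop is shown below; the interval, timeout, and expected count of 2 for this two-node deployment are illustrative assumptions layered on the command taken from the log.

// poll_pod_ips.go: minimal sketch of the polling seen above, waiting until the
// busybox deployment reports the expected number of pod IPs. Interval and
// timeout here are assumptions, not the harness's retry schedule.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podIPs(profile string) []string {
	// Same command the test runs, minus shell quoting around the jsonpath.
	out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(strings.TrimSpace(string(out)))
}

func main() {
	want := 2
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		ips := podIPs("multinode-849943")
		if len(ips) >= want {
			fmt.Println("pod IPs:", ips)
			return
		}
		fmt.Printf("expected %d Pod IPs but got %d, retrying...\n", want, len(ips))
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for pod IPs")
}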

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-l2ksb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-l2ksb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-ph292 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-849943 -- exec busybox-7b57f96db7-ph292 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (13.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-849943 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-849943 -v=5 --alsologtostderr: (13.022831698s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (13.65s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-849943 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (9.68s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp testdata/cp-test.txt multinode-849943:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp multinode-849943:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3354965188/001/cp-test_multinode-849943.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp multinode-849943:/home/docker/cp-test.txt multinode-849943-m02:/home/docker/cp-test_multinode-849943_multinode-849943-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m02 "sudo cat /home/docker/cp-test_multinode-849943_multinode-849943-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp multinode-849943:/home/docker/cp-test.txt multinode-849943-m03:/home/docker/cp-test_multinode-849943_multinode-849943-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m03 "sudo cat /home/docker/cp-test_multinode-849943_multinode-849943-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp testdata/cp-test.txt multinode-849943-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp multinode-849943-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3354965188/001/cp-test_multinode-849943-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp multinode-849943-m02:/home/docker/cp-test.txt multinode-849943:/home/docker/cp-test_multinode-849943-m02_multinode-849943.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943 "sudo cat /home/docker/cp-test_multinode-849943-m02_multinode-849943.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp multinode-849943-m02:/home/docker/cp-test.txt multinode-849943-m03:/home/docker/cp-test_multinode-849943-m02_multinode-849943-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m03 "sudo cat /home/docker/cp-test_multinode-849943-m02_multinode-849943-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp testdata/cp-test.txt multinode-849943-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp multinode-849943-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3354965188/001/cp-test_multinode-849943-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp multinode-849943-m03:/home/docker/cp-test.txt multinode-849943:/home/docker/cp-test_multinode-849943-m03_multinode-849943.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943 "sudo cat /home/docker/cp-test_multinode-849943-m03_multinode-849943.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 cp multinode-849943-m03:/home/docker/cp-test.txt multinode-849943-m02:/home/docker/cp-test_multinode-849943-m03_multinode-849943-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 ssh -n multinode-849943-m02 "sudo cat /home/docker/cp-test_multinode-849943-m03_multinode-849943-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.68s)
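Each cp in the CopyFile sequence above is paired with an ssh "sudo cat" on the target node to confirm the file actually landed. A short Go sketch of that copy-and-verify loop, assuming a minikube binary on PATH and the profile/node names from the log; verifyCopy is a hypothetical helper, not test code.

package main

import (
	"fmt"
	"os/exec"
)

// verifyCopy copies src to node:dst with "minikube cp" and then reads the file
// back over "minikube ssh" on that node, the same pattern the test exercises.
func verifyCopy(profile, src, node, dst string) error {
	cp := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst)
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v\n%s", err, out)
	}
	cat := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst)
	out, err := cat.CombinedOutput()
	if err != nil {
		return fmt.Errorf("verify failed: %v\n%s", err, out)
	}
	fmt.Printf("%s on %s:\n%s", dst, node, out)
	return nil
}

func main() {
	// Node names taken from the log above.
	for _, node := range []string{"multinode-849943", "multinode-849943-m02", "multinode-849943-m03"} {
		if err := verifyCopy("multinode-849943", "testdata/cp-test.txt", node, "/home/docker/cp-test.txt"); err != nil {
			panic(err)
		}
	}
}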

TestMultiNode/serial/StopNode (2.18s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-849943 node stop m03: (1.228007988s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-849943 status: exit status 7 (471.327752ms)

-- stdout --
	multinode-849943
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-849943-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-849943-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-849943 status --alsologtostderr: exit status 7 (476.74584ms)

-- stdout --
	multinode-849943
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-849943-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-849943-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 00:33:55.237234  875741 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:33:55.237339  875741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:33:55.237351  875741 out.go:374] Setting ErrFile to fd 2...
	I0917 00:33:55.237357  875741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:33:55.237576  875741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:33:55.237790  875741 out.go:368] Setting JSON to false
	I0917 00:33:55.237814  875741 mustload.go:65] Loading cluster: multinode-849943
	I0917 00:33:55.237895  875741 notify.go:220] Checking for updates...
	I0917 00:33:55.238314  875741 config.go:182] Loaded profile config "multinode-849943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:33:55.238337  875741 status.go:174] checking status of multinode-849943 ...
	I0917 00:33:55.238830  875741 cli_runner.go:164] Run: docker container inspect multinode-849943 --format={{.State.Status}}
	I0917 00:33:55.256075  875741 status.go:371] multinode-849943 host status = "Running" (err=<nil>)
	I0917 00:33:55.256100  875741 host.go:66] Checking if "multinode-849943" exists ...
	I0917 00:33:55.256383  875741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-849943
	I0917 00:33:55.274193  875741 host.go:66] Checking if "multinode-849943" exists ...
	I0917 00:33:55.274444  875741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:55.274481  875741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-849943
	I0917 00:33:55.291772  875741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32915 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/multinode-849943/id_rsa Username:docker}
	I0917 00:33:55.385454  875741 ssh_runner.go:195] Run: systemctl --version
	I0917 00:33:55.389962  875741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:55.402005  875741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:33:55.456533  875741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-09-17 00:33:55.446492605 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[
Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:33:55.457103  875741 kubeconfig.go:125] found "multinode-849943" server: "https://192.168.67.2:8443"
	I0917 00:33:55.457134  875741 api_server.go:166] Checking apiserver status ...
	I0917 00:33:55.457169  875741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:33:55.469549  875741 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2228/cgroup
	W0917 00:33:55.480070  875741 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2228/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:55.480125  875741 ssh_runner.go:195] Run: ls
	I0917 00:33:55.483926  875741 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0917 00:33:55.488252  875741 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0917 00:33:55.488278  875741 status.go:463] multinode-849943 apiserver status = Running (err=<nil>)
	I0917 00:33:55.488302  875741 status.go:176] multinode-849943 status: &{Name:multinode-849943 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:33:55.488321  875741 status.go:174] checking status of multinode-849943-m02 ...
	I0917 00:33:55.488559  875741 cli_runner.go:164] Run: docker container inspect multinode-849943-m02 --format={{.State.Status}}
	I0917 00:33:55.505462  875741 status.go:371] multinode-849943-m02 host status = "Running" (err=<nil>)
	I0917 00:33:55.505488  875741 host.go:66] Checking if "multinode-849943-m02" exists ...
	I0917 00:33:55.505731  875741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-849943-m02
	I0917 00:33:55.522474  875741 host.go:66] Checking if "multinode-849943-m02" exists ...
	I0917 00:33:55.522798  875741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:55.522837  875741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-849943-m02
	I0917 00:33:55.539953  875741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32920 SSHKeyPath:/home/jenkins/minikube-integration/21550-661878/.minikube/machines/multinode-849943-m02/id_rsa Username:docker}
	I0917 00:33:55.632960  875741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:55.644556  875741 status.go:176] multinode-849943-m02 status: &{Name:multinode-849943-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:33:55.644593  875741 status.go:174] checking status of multinode-849943-m03 ...
	I0917 00:33:55.644945  875741 cli_runner.go:164] Run: docker container inspect multinode-849943-m03 --format={{.State.Status}}
	I0917 00:33:55.662168  875741 status.go:371] multinode-849943-m03 host status = "Stopped" (err=<nil>)
	I0917 00:33:55.662190  875741 status.go:384] host is not running, skipping remaining checks
	I0917 00:33:55.662196  875741 status.go:176] multinode-849943-m03 status: &{Name:multinode-849943-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.18s)
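minikube status exits non-zero (exit status 7 in the runs above) when any node is stopped, even though the command itself worked, so callers have to read the exit code as state rather than as a failure. A rough Go sketch of doing that with os/exec; the exitCodeStopped name is my own label, not a minikube constant.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Exit code observed in the log above when at least one node is stopped.
// The name is my own label, not a minikube constant.
const exitCodeStopped = 7

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-849943", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == exitCodeStopped:
		// Matches the "host: Stopped / kubelet: Stopped" entries printed above.
		fmt.Println("status collected, but one or more nodes are stopped")
	default:
		panic(err)
	}
}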

TestMultiNode/serial/StartAfterStop (8.73s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-849943 node start m03 -v=5 --alsologtostderr: (8.053672444s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.73s)

TestMultiNode/serial/RestartKeepsNodes (73.51s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-849943
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-849943
E0917 00:34:16.735574  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-849943: (22.581323677s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849943 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-849943 --wait=true -v=5 --alsologtostderr: (50.829147032s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-849943
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.51s)

TestMultiNode/serial/DeleteNode (5.18s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-849943 node delete m03: (4.602971613s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.18s)

TestMultiNode/serial/StopMultiNode (21.59s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 stop
E0917 00:35:36.251084  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-849943 stop: (21.416069624s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-849943 status: exit status 7 (86.379636ms)

-- stdout --
	multinode-849943
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-849943-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-849943 status --alsologtostderr: exit status 7 (84.389734ms)

-- stdout --
	multinode-849943
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-849943-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 00:35:44.635135  890031 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:35:44.635249  890031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:35:44.635260  890031 out.go:374] Setting ErrFile to fd 2...
	I0917 00:35:44.635266  890031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:35:44.635516  890031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-661878/.minikube/bin
	I0917 00:35:44.635733  890031 out.go:368] Setting JSON to false
	I0917 00:35:44.635759  890031 mustload.go:65] Loading cluster: multinode-849943
	I0917 00:35:44.635879  890031 notify.go:220] Checking for updates...
	I0917 00:35:44.636309  890031 config.go:182] Loaded profile config "multinode-849943": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0917 00:35:44.636341  890031 status.go:174] checking status of multinode-849943 ...
	I0917 00:35:44.636927  890031 cli_runner.go:164] Run: docker container inspect multinode-849943 --format={{.State.Status}}
	I0917 00:35:44.654552  890031 status.go:371] multinode-849943 host status = "Stopped" (err=<nil>)
	I0917 00:35:44.654577  890031 status.go:384] host is not running, skipping remaining checks
	I0917 00:35:44.654585  890031 status.go:176] multinode-849943 status: &{Name:multinode-849943 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:35:44.654614  890031 status.go:174] checking status of multinode-849943-m02 ...
	I0917 00:35:44.654935  890031 cli_runner.go:164] Run: docker container inspect multinode-849943-m02 --format={{.State.Status}}
	I0917 00:35:44.672078  890031 status.go:371] multinode-849943-m02 host status = "Stopped" (err=<nil>)
	I0917 00:35:44.672107  890031 status.go:384] host is not running, skipping remaining checks
	I0917 00:35:44.672114  890031 status.go:176] multinode-849943-m02 status: &{Name:multinode-849943-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.59s)

TestMultiNode/serial/RestartMultiNode (46.68s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849943 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0917 00:36:13.667268  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-849943 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (46.08944911s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-849943 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.68s)

TestMultiNode/serial/ValidateNameConflict (25.79s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-849943
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849943-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-849943-m02 --driver=docker  --container-runtime=docker: exit status 14 (67.102752ms)

-- stdout --
	* [multinode-849943-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-849943-m02' is duplicated with machine name 'multinode-849943-m02' in profile 'multinode-849943'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-849943-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-849943-m03 --driver=docker  --container-runtime=docker: (23.252443888s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-849943
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-849943: exit status 80 (284.753955ms)

-- stdout --
	* Adding node m03 to cluster multinode-849943 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-849943-m03 already exists in multinode-849943-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-849943-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-849943-m03: (2.134283157s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.79s)

TestPreload (99.81s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-773136 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-773136 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (44.516979984s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-773136 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-773136 image pull gcr.io/k8s-minikube/busybox: (2.14671354s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-773136
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-773136: (10.773307254s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-773136 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-773136 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (39.959227243s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-773136 image list
helpers_test.go:175: Cleaning up "test-preload-773136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-773136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-773136: (2.201286357s)
--- PASS: TestPreload (99.81s)

TestScheduledStopUnix (95.14s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-639983 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-639983 --memory=3072 --driver=docker  --container-runtime=docker: (22.152259797s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-639983 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-639983 -n scheduled-stop-639983
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-639983 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0917 00:39:03.482838  665399 retry.go:31] will retry after 132.992µs: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.483997  665399 retry.go:31] will retry after 154.176µs: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.485215  665399 retry.go:31] will retry after 123.402µs: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.486382  665399 retry.go:31] will retry after 255.699µs: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.487530  665399 retry.go:31] will retry after 260.472µs: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.488677  665399 retry.go:31] will retry after 618.949µs: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.489855  665399 retry.go:31] will retry after 1.356231ms: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.492057  665399 retry.go:31] will retry after 1.468146ms: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.494292  665399 retry.go:31] will retry after 1.880138ms: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.496506  665399 retry.go:31] will retry after 4.903494ms: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.501761  665399 retry.go:31] will retry after 5.086151ms: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.506959  665399 retry.go:31] will retry after 12.323034ms: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.520226  665399 retry.go:31] will retry after 19.300853ms: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.540478  665399 retry.go:31] will retry after 18.843887ms: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
I0917 00:39:03.559860  665399 retry.go:31] will retry after 15.507527ms: open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/scheduled-stop-639983/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-639983 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-639983 -n scheduled-stop-639983
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-639983
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-639983 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-639983
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-639983: exit status 7 (68.877991ms)

-- stdout --
	scheduled-stop-639983
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-639983 -n scheduled-stop-639983
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-639983 -n scheduled-stop-639983: exit status 7 (66.322138ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-639983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-639983
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-639983: (1.661611107s)
--- PASS: TestScheduledStopUnix (95.14s)
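The run of "will retry after ..." lines above comes from polling the scheduled-stop pid file with a growing delay until it shows up. A small Go sketch of that kind of poll loop, assuming a pid-file path shaped like the one in the log; the backoff factors here are illustrative, not minikube's exact schedule.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path with an increasing delay, echoing the
// retry.go "will retry after ..." messages seen in the log.
func waitForFile(path string, attempts int) error {
	delay := 100 * time.Microsecond // the first retries in the log are on the order of microseconds
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %s not there yet\n", delay, path)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait; the real schedule is jittered
	}
	return fmt.Errorf("%s never appeared after %d attempts", path, attempts)
}

func main() {
	// Path shape taken from the log; the profile directory here is an assumption.
	pid := os.ExpandEnv("$HOME/.minikube/profiles/scheduled-stop-639983/pid")
	if err := waitForFile(pid, 15); err != nil {
		panic(err)
	}
	fmt.Println("scheduled-stop pid file found:", pid)
}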

TestSkaffold (80.71s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1342202223 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-923989 --memory=3072 --driver=docker  --container-runtime=docker
E0917 00:40:36.251499  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-923989 --memory=3072 --driver=docker  --container-runtime=docker: (23.022585575s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1342202223 run --minikube-profile skaffold-923989 --kube-context skaffold-923989 --status-check=true --port-forward=false --interactive=false
E0917 00:41:13.666922  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1342202223 run --minikube-profile skaffold-923989 --kube-context skaffold-923989 --status-check=true --port-forward=false --interactive=false: (39.742210715s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-78ffc7c869-jsx2t" [7b6f2b43-fe06-4c03-ae27-e2013e0587a8] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004007694s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-686489595f-rhplh" [69f75d0e-f379-4564-9268-3aadcf3a892d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003739279s
helpers_test.go:175: Cleaning up "skaffold-923989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-923989
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-923989: (3.260348528s)
--- PASS: TestSkaffold (80.71s)

TestInsufficientStorage (9.89s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-048335 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-048335 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.671411604s)

-- stdout --
	{"specversion":"1.0","id":"0e6d64dd-ce68-4bce-867a-1170444a96fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-048335] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"04070067-f0df-48a9-8b24-cb4b73630beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21550"}}
	{"specversion":"1.0","id":"619f8f71-2b06-4a11-bfed-9b4a17684135","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e50c42ed-740c-44d1-a724-6ed26658cc1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig"}}
	{"specversion":"1.0","id":"c9547dbd-fb91-4087-9c70-e84276d5a409","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube"}}
	{"specversion":"1.0","id":"8b592bed-7626-4904-a0d8-6714e2af40b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"357d82c0-f660-46c6-920b-c4c57c83a0b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fef262e2-f36a-4ebd-9a6f-b2ab9cb9646c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"72588880-f512-475f-a1ec-2098fb5727e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f2d6e9bf-ae2d-4aa6-8a49-231fe4cedccc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"34b43c81-f361-4d74-95c6-984266dc0bfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1d760de9-497a-4f98-8754-110dffe14652","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-048335\" primary control-plane node in \"insufficient-storage-048335\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c6940bf-52b9-4530-be7e-9cfe102f08c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0486df62-efd4-4e93-ad28-ba70721b3a67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"75c78991-ba71-43c7-942e-affe8be5f26c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-048335 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-048335 --output=json --layout=cluster: exit status 7 (266.627985ms)

-- stdout --
	{"Name":"insufficient-storage-048335","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-048335","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 00:41:44.700422  928140 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-048335" does not appear in /home/jenkins/minikube-integration/21550-661878/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-048335 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-048335 --output=json --layout=cluster: exit status 7 (269.377461ms)

-- stdout --
	{"Name":"insufficient-storage-048335","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-048335","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 00:41:44.970342  928245 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-048335" does not appear in /home/jenkins/minikube-integration/21550-661878/kubeconfig
	E0917 00:41:44.981517  928245 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/insufficient-storage-048335/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-048335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-048335
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-048335: (1.676835223s)
--- PASS: TestInsufficientStorage (9.89s)
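With --output=json, minikube start prints one CloudEvents-style JSON object per line, and the test above picks out the final error event (RSRC_DOCKER_STORAGE, exit code 26). A hedged Go sketch of decoding that stream from stdin; the event struct only models the fields visible in the log, not the full schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the fields of minikube's JSON output that appear in the
// log above; the real schema carries more.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Feed this program the stdout of:
	//   minikube start -p insufficient-storage-048335 --output=json --driver=docker
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// Matches the RSRC_DOCKER_STORAGE event with exitcode 26 in the log.
			fmt.Printf("error event: name=%s exitcode=%s\n", ev.Data["name"], ev.Data["exitcode"])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}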

TestRunningBinaryUpgrade (74.91s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1736472330 start -p running-upgrade-483898 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1736472330 start -p running-upgrade-483898 --memory=3072 --vm-driver=docker  --container-runtime=docker: (27.73081663s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-483898 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-483898 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.657234608s)
helpers_test.go:175: Cleaning up "running-upgrade-483898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-483898
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-483898: (2.224057609s)
--- PASS: TestRunningBinaryUpgrade (74.91s)

TestKubernetesUpgrade (341.98s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.197757785s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-401604
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-401604: (11.986199439s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-401604 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-401604 status --format={{.Host}}: exit status 7 (88.253122ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m28.851577449s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-401604 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (70.465545ms)

-- stdout --
	* [kubernetes-upgrade-401604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-401604
	    minikube start -p kubernetes-upgrade-401604 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4016042 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-401604 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-401604 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.946349294s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-401604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-401604
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-401604: (2.781200808s)
--- PASS: TestKubernetesUpgrade (341.98s)

TestMissingContainerUpgrade (108.42s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1283814273 start -p missing-upgrade-536242 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1283814273 start -p missing-upgrade-536242 --memory=3072 --driver=docker  --container-runtime=docker: (51.299096073s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-536242
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-536242: (10.407981582s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-536242
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-536242 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-536242 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.458932737s)
helpers_test.go:175: Cleaning up "missing-upgrade-536242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-536242
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-536242: (2.394136032s)
--- PASS: TestMissingContainerUpgrade (108.42s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-500892 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-500892 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (71.542372ms)

-- stdout --
	* [NoKubernetes-500892] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-661878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-661878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestStoppedBinaryUpgrade/Setup (3.19s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.19s)

TestNoKubernetes/serial/StartWithK8s (42.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-500892 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-500892 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.669507727s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-500892 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.02s)

TestStoppedBinaryUpgrade/Upgrade (80.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1007837048 start -p stopped-upgrade-524134 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1007837048 start -p stopped-upgrade-524134 --memory=3072 --vm-driver=docker  --container-runtime=docker: (52.605739451s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1007837048 -p stopped-upgrade-524134 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1007837048 -p stopped-upgrade-524134 stop: (10.833745866s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-524134 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-524134 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (16.664292199s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (80.14s)

TestNoKubernetes/serial/StartWithStopK8s (18.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-500892 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-500892 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (16.110715313s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-500892 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-500892 status -o json: exit status 2 (308.121799ms)

-- stdout --
	{"Name":"NoKubernetes-500892","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-500892
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-500892: (2.332950137s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.75s)

TestNoKubernetes/serial/Start (8.24s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-500892 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-500892 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (8.244106866s)
--- PASS: TestNoKubernetes/serial/Start (8.24s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-500892 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-500892 "sudo systemctl is-active --quiet service kubelet": exit status 1 (257.27661ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (1.34s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.34s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-500892
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-500892: (1.210935734s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (8.83s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-500892 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-500892 --driver=docker  --container-runtime=docker: (8.825647496s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.83s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-500892 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-500892 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.366558ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-524134
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

TestPause/serial/Start (49.46s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-734329 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-734329 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (49.461148179s)
--- PASS: TestPause/serial/Start (49.46s)

TestPause/serial/SecondStartNoReconfiguration (51.88s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-734329 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-734329 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.858677955s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.88s)

TestPause/serial/Pause (0.46s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-734329 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.46s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-734329 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-734329 --output=json --layout=cluster: exit status 2 (314.54271ms)

-- stdout --
	{"Name":"pause-734329","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-734329","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.52s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-734329 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.52s)

TestPause/serial/PauseAgain (0.53s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-734329 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

TestPause/serial/DeletePaused (2.2s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-734329 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-734329 --alsologtostderr -v=5: (2.196862541s)
--- PASS: TestPause/serial/DeletePaused (2.20s)

TestPause/serial/VerifyDeletedResources (16.24s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.183248447s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-734329
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-734329: exit status 1 (16.561503ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-734329: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.24s)

TestStartStop/group/old-k8s-version/serial/FirstStart (40.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-591839 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0917 00:45:19.332097  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-591839 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (40.741881689s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (40.74s)

TestStartStop/group/no-preload/serial/FirstStart (51.42s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-152605 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0917 00:45:36.250806  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-152605 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (51.417808305s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.42s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-591839 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [437c42e3-633f-441e-abad-9d2b1dae1755] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [437c42e3-633f-441e-abad-9d2b1dae1755] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003842538s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-591839 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-591839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-591839 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/old-k8s-version/serial/Stop (10.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-591839 --alsologtostderr -v=3
E0917 00:46:13.666234  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-591839 --alsologtostderr -v=3: (10.853465259s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.85s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-591839 -n old-k8s-version-591839
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-591839 -n old-k8s-version-591839: exit status 7 (79.722266ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-591839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (122.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-591839 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0917 00:46:22.491072  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:22.497455  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:22.508869  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:22.530290  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:22.571705  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:22.653466  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:22.815598  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:23.137757  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:23.779655  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:25.061724  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-591839 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (2m1.665564294s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-591839 -n old-k8s-version-591839
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (122.18s)

TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-152605 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [71250864-6039-414e-9ada-26c9656d5ca3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0917 00:46:27.623825  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [71250864-6039-414e-9ada-26c9656d5ca3] Running
E0917 00:46:32.745214  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004129505s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-152605 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-152605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-152605 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/no-preload/serial/Stop (10.83s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-152605 --alsologtostderr -v=3
E0917 00:46:42.986977  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-152605 --alsologtostderr -v=3: (10.827380437s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.83s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152605 -n no-preload-152605
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152605 -n no-preload-152605: exit status 7 (76.416677ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-152605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (157.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-152605 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0917 00:47:03.468933  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:47:44.431204  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-152605 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (2m36.940909978s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-152605 -n no-preload-152605
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (157.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h7lr8" [6e4bf37c-3a32-4363-807d-177c52b74637] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004337758s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h7lr8" [6e4bf37c-3a32-4363-807d-177c52b74637] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.075152213s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-591839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/FirstStart (66.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-411882 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-411882 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m6.010318419s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.01s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-990042 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-990042 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m8.64117435s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.64s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-591839 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-591839 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-591839 --alsologtostderr -v=1: (1.106121856s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-591839 -n old-k8s-version-591839
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-591839 -n old-k8s-version-591839: exit status 2 (354.524664ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-591839 -n old-k8s-version-591839
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-591839 -n old-k8s-version-591839: exit status 2 (392.118818ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-591839 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-591839 -n old-k8s-version-591839
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-591839 -n old-k8s-version-591839
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.22s)

TestStartStop/group/newest-cni/serial/FirstStart (31.4s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-131853 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0917 00:49:06.352638  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-131853 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (31.3961903s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.40s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-131853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (10.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-131853 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-131853 --alsologtostderr -v=3: (10.791194301s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.79s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-131853 -n newest-cni-131853
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-131853 -n newest-cni-131853: exit status 7 (68.745969ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-131853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (16.6s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-131853 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-131853 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (16.267854174s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-131853 -n newest-cni-131853
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.60s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vs29p" [9053b687-b3fb-43e6-bbfc-b044137bc122] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0044042s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vs29p" [9053b687-b3fb-43e6-bbfc-b044137bc122] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00381855s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-152605 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-411882 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bf1f3b90-5c2e-4c4f-ac12-6d3449a7fbce] Pending
helpers_test.go:352: "busybox" [bf1f3b90-5c2e-4c4f-ac12-6d3449a7fbce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bf1f3b90-5c2e-4c4f-ac12-6d3449a7fbce] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00515831s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-411882 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-990042 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [85a58005-57d0-4a14-9fd0-e59631978ef3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [85a58005-57d0-4a14-9fd0-e59631978ef3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004755999s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-990042 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: (dbg) Done: kubectl --context default-k8s-diff-port-990042 exec busybox -- /bin/sh -c "ulimit -n": (1.06616637s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.21s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-152605 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.65s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-152605 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152605 -n no-preload-152605
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152605 -n no-preload-152605: exit status 2 (354.767533ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-152605 -n no-preload-152605
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-152605 -n no-preload-152605: exit status 2 (335.028798ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-152605 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-152605 -n no-preload-152605
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-152605 -n no-preload-152605
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-131853 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-411882 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-411882 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestNetworkPlugins/group/auto/Start (113.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m53.776859966s)
--- PASS: TestNetworkPlugins/group/auto/Start (113.78s)

TestStartStop/group/embed-certs/serial/Stop (10.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-411882 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-411882 --alsologtostderr -v=3: (10.82171664s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.82s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-990042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-990042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.215458116s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-990042 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-990042 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-990042 --alsologtostderr -v=3: (10.839344921s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

TestNetworkPlugins/group/kindnet/Start (168.37s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (2m48.368575774s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (168.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411882 -n embed-certs-411882
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411882 -n embed-certs-411882: exit status 7 (78.956126ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-411882 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
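The sequence above can be replayed by hand with the same binary and profile name used in this run (both are assumptions outside this log); a stopped profile reports Host=Stopped with exit status 7, which the test explicitly treats as acceptable:

out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411882 -n embed-certs-411882
echo "status exited with $?"   # 7 while the profile is stopped; the test logs this as "may be ok"
out/minikube-linux-amd64 addons enable dashboard -p embed-certs-411882 --images=MetricsScraper=registry.k8s.io/echoserver:1.4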

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (26.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-411882 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-411882 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (25.725254075s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-411882 -n embed-certs-411882
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (26.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990042 -n default-k8s-diff-port-990042
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990042 -n default-k8s-diff-port-990042: exit status 7 (82.598437ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-990042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (97.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-990042 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-990042 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m36.93398473s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-990042 -n default-k8s-diff-port-990042
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (97.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nh5t2" [b74ae05d-9ab6-4d9e-9adf-6af71b143dd8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004599164s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nh5t2" [b74ae05d-9ab6-4d9e-9adf-6af71b143dd8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004270471s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-411882 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-411882 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
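The image audit above reduces to a single CLI call; a minimal manual equivalent, assuming the same build and profile as this run:

out/minikube-linux-amd64 -p embed-certs-411882 image list --format=json   # the test scans this JSON and flags anything it does not recognise as a minikube image, e.g. the busybox test image above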

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-411882 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411882 -n embed-certs-411882
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411882 -n embed-certs-411882: exit status 2 (311.661144ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411882 -n embed-certs-411882
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411882 -n embed-certs-411882: exit status 2 (309.715751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-411882 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411882 -n embed-certs-411882
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411882 -n embed-certs-411882
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.38s)
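The pause cycle above maps onto five CLI calls; a replay sketch with the same profile (exit codes as observed in this run):

out/minikube-linux-amd64 pause -p embed-certs-411882 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411882 -n embed-certs-411882   # prints "Paused", exits 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-411882 -n embed-certs-411882     # prints "Stopped", exits 2
out/minikube-linux-amd64 unpause -p embed-certs-411882 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-411882 -n embed-certs-411882   # exits 0 again after unpause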

                                                
                                    
TestNetworkPlugins/group/flannel/Start (79.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0917 00:50:36.250517  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:56.737969  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:57.524508  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:57.531075  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:57.542493  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:57.563872  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:57.605358  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:57.686783  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:57.848317  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:58.170369  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:58.812172  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:00.094320  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:02.655643  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:07.777918  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:13.667198  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:18.019294  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:22.490680  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:27.149141  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:27.155560  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:27.166980  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:27.188396  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:27.229791  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:27.311231  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:27.472812  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:27.794603  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:28.436746  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:29.718062  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:32.280179  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m19.06634449s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.07s)
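The start command above is the only input to this test; a manual replay follows, with a rough equivalent of the ControllerPod check that appears later in this report (label and namespace taken from that check):

out/minikube-linux-amd64 start -p flannel-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker --container-runtime=docker
kubectl --context flannel-656031 get pods -n kube-flannel -l app=flannel   # the flannel DaemonSet pod should be Running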

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h4xvq" [54e00ed2-b07d-4246-83fa-b9e4d9c7713b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003727546s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-656031 "pgrep -a kubelet"
I0917 00:51:36.154301  665399 config.go:182] Loaded profile config "auto-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-656031 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qrbwn" [8eeff4ab-f1e3-4613-b65d-debe5047d399] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 00:51:37.401659  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:38.500799  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qrbwn" [8eeff4ab-f1e3-4613-b65d-debe5047d399] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004621979s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h4xvq" [54e00ed2-b07d-4246-83fa-b9e4d9c7713b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00389171s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-990042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-990042 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-656031 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-990042 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-990042 -n default-k8s-diff-port-990042
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-990042 -n default-k8s-diff-port-990042: exit status 2 (323.645842ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-990042 -n default-k8s-diff-port-990042
E0917 00:51:47.643958  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-990042 -n default-k8s-diff-port-990042: exit status 2 (295.073407ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-990042 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-990042 -n default-k8s-diff-port-990042
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-990042 -n default-k8s-diff-port-990042
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.43s)
E0917 00:57:37.810016  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kindnet-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:57:39.091462  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kindnet-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:57:41.654113  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kindnet-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
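DNS, Localhost and HairPin for the auto variant are each a single exec into the netcat deployment; the probes, verbatim from the runs above, can be replayed against the same context (assumed to still exist):

kubectl --context auto-656031 exec deployment/netcat -- nslookup kubernetes.default                    # in-cluster DNS resolution
kubectl --context auto-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # localhost reachability
kubectl --context auto-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin: the pod reaching itself through its service name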

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (64.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m4.175724685s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fdl9f" [cc4f5434-e6e9-4bd8-8ca4-d1cfd7b85fbd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004089979s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-656031 "pgrep -a kubelet"
I0917 00:52:01.488060  665399 config.go:182] Loaded profile config "flannel-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-656031 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6kjnk" [962bce9f-6091-4c9a-80da-ee4f9b958dc2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6kjnk" [962bce9f-6091-4c9a-80da-ee4f9b958dc2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.00445045s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-656031 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-j8p84" [d0ca422c-1bb5-4f00-bddc-f717cabe1a77] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003367492s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-656031 "pgrep -a kubelet"
I0917 00:52:42.802371  665399 config.go:182] Loaded profile config "kindnet-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-656031 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jqmr9" [08df52ef-0942-463e-a8f0-f900af67ccbe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jqmr9" [08df52ef-0942-463e-a8f0-f900af67ccbe] Running
E0917 00:52:49.087819  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004137995s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-656031 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-656031 "pgrep -a kubelet"
I0917 00:52:56.048823  665399 config.go:182] Loaded profile config "enable-default-cni-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-656031 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sl4t4" [7ff3e776-c53c-41a1-b6da-30741e8d52cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sl4t4" [7ff3e776-c53c-41a1-b6da-30741e8d52cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004363113s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)
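Every NetCatPod step in this group follows the same pattern: force-replace the deployment from testdata, then wait for the pod to go Running. A hand-run approximation for this profile is below; the manifest path is relative to the integration-test working directory, and the second command is a point-in-time check rather than the test's 15m poll:

kubectl --context enable-default-cni-656031 replace --force -f testdata/netcat-deployment.yaml
kubectl --context enable-default-cni-656031 get pods -l app=netcat -o wide   # expect the netcat pod to reach Running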

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-656031 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Start (355.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0917 00:53:41.384177  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:11.009366  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:35.596921  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:35.603335  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:35.614740  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:35.636097  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:35.677576  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:35.759045  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:35.920739  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:36.242432  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:36.884640  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:38.166072  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:40.727457  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:45.849144  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:56.091223  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:55:16.572964  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:55:36.250377  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/addons-875427/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:55:57.524750  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:55:57.535196  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:13.666554  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/functional-650494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:22.491152  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/skaffold-923989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:25.226299  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/old-k8s-version-591839/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:27.149961  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/no-preload-152605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:36.327163  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:36.333541  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:36.344901  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:36.366367  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:36.407757  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:36.489252  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:36.651002  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:36.972431  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:37.613716  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:38.895434  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:41.457016  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (5m55.806907036s)
--- PASS: TestNetworkPlugins/group/false/Start (355.81s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0917 00:57:00.324084  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/flannel-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:57:05.445658  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/flannel-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:57:15.687365  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/flannel-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:57:17.302405  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/auto-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:57:19.456570  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/default-k8s-diff-port-990042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-656031 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m4.873474201s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.87s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-656031 "pgrep -a kubelet"
I0917 00:58:03.673630  665399 config.go:182] Loaded profile config "custom-flannel-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-656031 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6jpjp" [e53b443b-d7a6-4e78-a0f5-96dd30a2b6fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 00:58:06.476140  665399 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/enable-default-cni-656031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-6jpjp" [e53b443b-d7a6-4e78-a0f5-96dd30a2b6fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.002869313s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-656031 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-656031 "pgrep -a kubelet"
I0917 00:59:21.906580  665399 config.go:182] Loaded profile config "false-656031": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-656031 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7lkmc" [cf2feaf2-66f0-445a-81ff-657383471c3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7lkmc" [cf2feaf2-66f0-445a-81ff-657383471c3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 8.003691778s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (8.18s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-656031 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-656031 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

                                                
                                    

Test skip (22/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
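Note: these DownloadOnly subtests skip because the preloaded tarball already contains the images and binaries, so there is nothing separate to cache. A sketch of that skip-guard pattern; preloadExists here is a hypothetical stand-in for minikube's own preload check, not the aaa_download_only_test.go code:

package sketch

import "testing"

// preloadExists is a hypothetical stand-in for minikube's own preload check.
func preloadExists(k8sVersion string) bool { return true }

func validateCachedImages(t *testing.T, k8sVersion string) {
    if preloadExists(k8sVersion) {
        // A preloaded tarball already ships the images, so nothing is cached
        // separately and there is nothing to validate.
        t.Skip("Preload exists, images won't be cached")
    }
    // ...otherwise assert the expected images are present in the cache.
}

func TestCachedImagesSketch(t *testing.T) {
    validateCachedImages(t, "v1.28.0")
}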

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-171805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-171805
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
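Note: even though this test is skipped on the docker driver, it still deletes the profile it registered, which is the delete call logged above. A minimal equivalent of that cleanup, assuming the binary path used in this run:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Delete the profile registered by the skipped test, mirroring the
    // cleanup step logged by helpers_test.go above.
    out, err := exec.Command("out/minikube-linux-amd64", "delete",
        "-p", "disable-driver-mounts-171805").CombinedOutput()
    if err != nil {
        fmt.Printf("delete failed: %v\n%s", err, out)
        return
    }
    fmt.Print(string(out))
}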

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-656031 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-656031" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 00:43:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-401604
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-661878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 00:43:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-734329
contexts:
- context:
    cluster: kubernetes-upgrade-401604
    user: kubernetes-upgrade-401604
  name: kubernetes-upgrade-401604
- context:
    cluster: pause-734329
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 00:43:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-734329
  name: pause-734329
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-401604
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubernetes-upgrade-401604/client.crt
    client-key: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/kubernetes-upgrade-401604/client.key
- name: pause-734329
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/pause-734329/client.crt
    client-key: /home/jenkins/minikube-integration/21550-661878/.minikube/profiles/pause-734329/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-656031

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-656031" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-656031"

                                                
                                                
----------------------- debugLogs end: cilium-656031 [took: 3.309220452s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-656031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-656031
--- SKIP: TestNetworkPlugins/group/cilium (3.47s)
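Note: every kubectl probe in the debugLogs dump above fails with "context was not found" because the cilium profile was skipped before a cluster (and hence a kubeconfig context) was ever created. A small sketch of guarding such collection by checking for the context first, via kubectl config get-contexts -o name (an illustration, not how the test harness does it):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// hasContext reports whether the named context is present in the active
// kubeconfig, using `kubectl config get-contexts -o name`.
func hasContext(name string) (bool, error) {
    out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    if err != nil {
        return false, err
    }
    for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        if line == name {
            return true, nil
        }
    }
    return false, nil
}

func main() {
    ok, err := hasContext("cilium-656031")
    if err != nil {
        fmt.Println("could not read kubeconfig:", err)
        return
    }
    if !ok {
        // This is exactly why the probes above fail: the skipped profile
        // never created a cluster or a kubeconfig context.
        fmt.Println("context cilium-656031 not found; skipping debug collection")
        return
    }
    fmt.Println("context exists; safe to collect kubectl-based debug logs")
}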

                                                
                                    