Test Report: Docker_Linux_docker_arm64 20091

6f6ff76044c36bcb4277257fa9dc7e7f34dfce32:2024-12-16:37513

Failed tests (1/345)

| Order | Failed test                                  | Duration (s) |
|-------|----------------------------------------------|--------------|
| 177   | TestMultiControlPlane/serial/RestartCluster  | 109.02       |
TestMultiControlPlane/serial/RestartCluster (109.02s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-082404 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E1216 19:55:14.740539    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-082404 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m44.646885839s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:591: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-082404       Ready      control-plane   10m     v1.32.0
	ha-082404-m02   Ready      control-plane   9m45s   v1.32.0
	ha-082404-m04   NotReady   <none>          8m9s    v1.32.0

-- /stdout --
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 Unknown
	'

-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-082404
helpers_test.go:235: (dbg) docker inspect ha-082404:

-- stdout --
	[
	    {
	        "Id": "df79637e07d1fa9b770fdad3a3220b4d498aee0558c4946d136f873d151dccd1",
	        "Created": "2024-12-16T19:45:53.934864238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 103886,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-16T19:54:50.589481723Z",
	            "FinishedAt": "2024-12-16T19:54:49.800833038Z"
	        },
	        "Image": "sha256:7cd263f59e19eeefdb79b99186c433854c2243e3d7fa2988b2d817cac7fc54f8",
	        "ResolvConfPath": "/var/lib/docker/containers/df79637e07d1fa9b770fdad3a3220b4d498aee0558c4946d136f873d151dccd1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df79637e07d1fa9b770fdad3a3220b4d498aee0558c4946d136f873d151dccd1/hostname",
	        "HostsPath": "/var/lib/docker/containers/df79637e07d1fa9b770fdad3a3220b4d498aee0558c4946d136f873d151dccd1/hosts",
	        "LogPath": "/var/lib/docker/containers/df79637e07d1fa9b770fdad3a3220b4d498aee0558c4946d136f873d151dccd1/df79637e07d1fa9b770fdad3a3220b4d498aee0558c4946d136f873d151dccd1-json.log",
	        "Name": "/ha-082404",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-082404:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-082404",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bd50fb4651c36616cd0e4597826049bf118bf361421e50a1dc7bb417cc5c40e9-init/diff:/var/lib/docker/overlay2/acc364fe6cd4e3915e2c087c9731511b8036f6f5517ed637cb16c71fff260f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd50fb4651c36616cd0e4597826049bf118bf361421e50a1dc7bb417cc5c40e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd50fb4651c36616cd0e4597826049bf118bf361421e50a1dc7bb417cc5c40e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd50fb4651c36616cd0e4597826049bf118bf361421e50a1dc7bb417cc5c40e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-082404",
	                "Source": "/var/lib/docker/volumes/ha-082404/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-082404",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-082404",
	                "name.minikube.sigs.k8s.io": "ha-082404",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26e8215efe50033d482a3cf80230649afa6ad72069555c764461aa73d989da4b",
	            "SandboxKey": "/var/run/docker/netns/26e8215efe50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-082404": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a15d316ef2180ac97e1a928fbbc2c912357b4f33526d08bdd6091d50fcb70614",
	                    "EndpointID": "c1acd7b1247149870ecf0824a65b79b11e1c73a0672bd91763a9b5f05245e4ce",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-082404",
	                        "df79637e07d1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-082404 -n ha-082404
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-082404 logs -n 25: (1.689215145s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-082404 cp ha-082404-m03:/home/docker/cp-test.txt                              | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m04:/home/docker/cp-test_ha-082404-m03_ha-082404-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n                                                                 | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n ha-082404-m04 sudo cat                                          | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | /home/docker/cp-test_ha-082404-m03_ha-082404-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-082404 cp testdata/cp-test.txt                                                | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n                                                                 | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-082404 cp ha-082404-m04:/home/docker/cp-test.txt                              | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3791437405/001/cp-test_ha-082404-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n                                                                 | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-082404 cp ha-082404-m04:/home/docker/cp-test.txt                              | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404:/home/docker/cp-test_ha-082404-m04_ha-082404.txt                       |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n                                                                 | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n ha-082404 sudo cat                                              | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | /home/docker/cp-test_ha-082404-m04_ha-082404.txt                                 |           |         |         |                     |                     |
	| cp      | ha-082404 cp ha-082404-m04:/home/docker/cp-test.txt                              | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m02:/home/docker/cp-test_ha-082404-m04_ha-082404-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n                                                                 | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n ha-082404-m02 sudo cat                                          | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | /home/docker/cp-test_ha-082404-m04_ha-082404-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-082404 cp ha-082404-m04:/home/docker/cp-test.txt                              | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m03:/home/docker/cp-test_ha-082404-m04_ha-082404-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n                                                                 | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | ha-082404-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-082404 ssh -n ha-082404-m03 sudo cat                                          | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:48 UTC |
	|         | /home/docker/cp-test_ha-082404-m04_ha-082404-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-082404 node stop m02 -v=7                                                     | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:48 UTC | 16 Dec 24 19:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-082404 node start m02 -v=7                                                    | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:49 UTC | 16 Dec 24 19:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-082404 -v=7                                                           | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-082404 -v=7                                                                | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:49 UTC | 16 Dec 24 19:50 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-082404 --wait=true -v=7                                                    | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:50 UTC | 16 Dec 24 19:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-082404                                                                | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:54 UTC |                     |
	| node    | ha-082404 node delete m03 -v=7                                                   | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:54 UTC | 16 Dec 24 19:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-082404 stop -v=7                                                              | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:54 UTC | 16 Dec 24 19:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-082404 --wait=true                                                         | ha-082404 | jenkins | v1.34.0 | 16 Dec 24 19:54 UTC | 16 Dec 24 19:56 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=docker                                                       |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 19:54:50
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 19:54:50.235257  103685 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:54:50.235605  103685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:54:50.235621  103685 out.go:358] Setting ErrFile to fd 2...
	I1216 19:54:50.235627  103685 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:54:50.235898  103685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
	I1216 19:54:50.236312  103685 out.go:352] Setting JSON to false
	I1216 19:54:50.237210  103685 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":2236,"bootTime":1734376655,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 19:54:50.237289  103685 start.go:139] virtualization:  
	I1216 19:54:50.240598  103685 out.go:177] * [ha-082404] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1216 19:54:50.244098  103685 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 19:54:50.244279  103685 notify.go:220] Checking for updates...
	I1216 19:54:50.249718  103685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:54:50.252408  103685 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:54:50.255139  103685 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	I1216 19:54:50.257744  103685 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 19:54:50.260357  103685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 19:54:50.263576  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:54:50.264171  103685 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 19:54:50.294096  103685 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 19:54:50.294229  103685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 19:54:50.348245  103685 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-12-16 19:54:50.339273471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 19:54:50.348362  103685 docker.go:318] overlay module found
	I1216 19:54:50.351393  103685 out.go:177] * Using the docker driver based on existing profile
	I1216 19:54:50.353949  103685 start.go:297] selected driver: docker
	I1216 19:54:50.353971  103685 start.go:901] validating driver "docker" against &{Name:ha-082404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:ha-082404 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.32.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false ku
beflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:54:50.354126  103685 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 19:54:50.354232  103685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 19:54:50.407087  103685 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-12-16 19:54:50.397788268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 19:54:50.407591  103685 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 19:54:50.407625  103685 cni.go:84] Creating CNI manager for ""
	I1216 19:54:50.407672  103685 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 19:54:50.407726  103685 start.go:340] cluster config:
	{Name:ha-082404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:ha-082404 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:f
alse nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1216 19:54:50.410682  103685 out.go:177] * Starting "ha-082404" primary control-plane node in "ha-082404" cluster
	I1216 19:54:50.413209  103685 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 19:54:50.415916  103685 out.go:177] * Pulling base image v0.0.45-1734029593-20090 ...
	I1216 19:54:50.418524  103685 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 19:54:50.418582  103685 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 19:54:50.418597  103685 cache.go:56] Caching tarball of preloaded images
	I1216 19:54:50.418702  103685 preload.go:172] Found /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 19:54:50.418718  103685 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 19:54:50.418860  103685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/config.json ...
	I1216 19:54:50.419128  103685 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1216 19:54:50.438566  103685 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon, skipping pull
	I1216 19:54:50.438589  103685 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in daemon, skipping load
	I1216 19:54:50.438609  103685 cache.go:194] Successfully downloaded all kic artifacts
	I1216 19:54:50.438632  103685 start.go:360] acquireMachinesLock for ha-082404: {Name:mk4ec7695b5b4eab6f186b464ef40ca9938b783b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 19:54:50.438694  103685 start.go:364] duration metric: took 44.372µs to acquireMachinesLock for "ha-082404"
	I1216 19:54:50.438717  103685 start.go:96] Skipping create...Using existing machine configuration
	I1216 19:54:50.438735  103685 fix.go:54] fixHost starting: 
	I1216 19:54:50.438987  103685 cli_runner.go:164] Run: docker container inspect ha-082404 --format={{.State.Status}}
	I1216 19:54:50.455675  103685 fix.go:112] recreateIfNeeded on ha-082404: state=Stopped err=<nil>
	W1216 19:54:50.455708  103685 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 19:54:50.458876  103685 out.go:177] * Restarting existing docker container for "ha-082404" ...
	I1216 19:54:50.461554  103685 cli_runner.go:164] Run: docker start ha-082404
	I1216 19:54:50.754291  103685 cli_runner.go:164] Run: docker container inspect ha-082404 --format={{.State.Status}}
	I1216 19:54:50.778480  103685 kic.go:430] container "ha-082404" state is running.
	I1216 19:54:50.781122  103685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404
	I1216 19:54:50.807553  103685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/config.json ...
	I1216 19:54:50.807902  103685 machine.go:93] provisionDockerMachine start ...
	I1216 19:54:50.808019  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:50.835064  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:54:50.835550  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1216 19:54:50.835611  103685 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 19:54:50.836330  103685 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 19:54:53.981301  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-082404
	
	I1216 19:54:53.981327  103685 ubuntu.go:169] provisioning hostname "ha-082404"
	I1216 19:54:53.981391  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:53.998828  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:54:53.999082  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1216 19:54:53.999098  103685 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-082404 && echo "ha-082404" | sudo tee /etc/hostname
	I1216 19:54:54.162240  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-082404
	
	I1216 19:54:54.162318  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:54.181153  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:54:54.181400  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1216 19:54:54.181422  103685 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-082404' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-082404/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-082404' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 19:54:54.326081  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 19:54:54.326112  103685 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20091-2258/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-2258/.minikube}
	I1216 19:54:54.326154  103685 ubuntu.go:177] setting up certificates
	I1216 19:54:54.326164  103685 provision.go:84] configureAuth start
	I1216 19:54:54.326249  103685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404
	I1216 19:54:54.345500  103685 provision.go:143] copyHostCerts
	I1216 19:54:54.345555  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem
	I1216 19:54:54.345602  103685 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem, removing ...
	I1216 19:54:54.345612  103685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem
	I1216 19:54:54.345709  103685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem (1675 bytes)
	I1216 19:54:54.345860  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem
	I1216 19:54:54.345896  103685 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem, removing ...
	I1216 19:54:54.345907  103685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem
	I1216 19:54:54.345959  103685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem (1082 bytes)
	I1216 19:54:54.346016  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem
	I1216 19:54:54.346037  103685 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem, removing ...
	I1216 19:54:54.346047  103685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem
	I1216 19:54:54.346077  103685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem (1123 bytes)
	I1216 19:54:54.346147  103685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca-key.pem org=jenkins.ha-082404 san=[127.0.0.1 192.168.49.2 ha-082404 localhost minikube]
	I1216 19:54:54.904241  103685 provision.go:177] copyRemoteCerts
	I1216 19:54:54.904315  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 19:54:54.904365  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:54.921108  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404/id_rsa Username:docker}
	I1216 19:54:55.030019  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 19:54:55.030090  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 19:54:55.057253  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 19:54:55.057324  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1216 19:54:55.082359  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 19:54:55.082445  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 19:54:55.108687  103685 provision.go:87] duration metric: took 782.507179ms to configureAuth
	I1216 19:54:55.108722  103685 ubuntu.go:193] setting minikube options for container-runtime
	I1216 19:54:55.109007  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:54:55.109078  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:55.126596  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:54:55.126855  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1216 19:54:55.126871  103685 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 19:54:55.274532  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 19:54:55.274554  103685 ubuntu.go:71] root file system type: overlay
	I1216 19:54:55.274673  103685 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 19:54:55.274763  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:55.294733  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:54:55.294984  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1216 19:54:55.295071  103685 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 19:54:55.454750  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 19:54:55.454860  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:55.472685  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:54:55.472946  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1216 19:54:55.472971  103685 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 19:54:55.622904  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 19:54:55.622929  103685 machine.go:96] duration metric: took 4.814982156s to provisionDockerMachine
	I1216 19:54:55.622941  103685 start.go:293] postStartSetup for "ha-082404" (driver="docker")
	I1216 19:54:55.622970  103685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 19:54:55.623046  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 19:54:55.623097  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:55.641060  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404/id_rsa Username:docker}
	I1216 19:54:55.742856  103685 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 19:54:55.745983  103685 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 19:54:55.746022  103685 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 19:54:55.746032  103685 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 19:54:55.746039  103685 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1216 19:54:55.746050  103685 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-2258/.minikube/addons for local assets ...
	I1216 19:54:55.746103  103685 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-2258/.minikube/files for local assets ...
	I1216 19:54:55.746185  103685 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem -> 75692.pem in /etc/ssl/certs
	I1216 19:54:55.746197  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem -> /etc/ssl/certs/75692.pem
	I1216 19:54:55.746340  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 19:54:55.755138  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem --> /etc/ssl/certs/75692.pem (1708 bytes)
	I1216 19:54:55.779124  103685 start.go:296] duration metric: took 156.167284ms for postStartSetup
	I1216 19:54:55.779250  103685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 19:54:55.779325  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:55.795868  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404/id_rsa Username:docker}
	I1216 19:54:55.895240  103685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 19:54:55.900217  103685 fix.go:56] duration metric: took 5.461483028s for fixHost
	I1216 19:54:55.900247  103685 start.go:83] releasing machines lock for "ha-082404", held for 5.461540766s
	I1216 19:54:55.900324  103685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404
	I1216 19:54:55.918466  103685 ssh_runner.go:195] Run: cat /version.json
	I1216 19:54:55.918504  103685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 19:54:55.918519  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:55.918576  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:54:55.942605  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404/id_rsa Username:docker}
	I1216 19:54:55.952264  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404/id_rsa Username:docker}
	I1216 19:54:56.041667  103685 ssh_runner.go:195] Run: systemctl --version
	I1216 19:54:56.181578  103685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 19:54:56.186034  103685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1216 19:54:56.205951  103685 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1216 19:54:56.206045  103685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 19:54:56.216277  103685 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 19:54:56.216345  103685 start.go:495] detecting cgroup driver to use...
	I1216 19:54:56.216385  103685 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 19:54:56.216485  103685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 19:54:56.232849  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1216 19:54:56.243392  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 19:54:56.252853  103685 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 19:54:56.252977  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 19:54:56.262606  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 19:54:56.272430  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 19:54:56.282714  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 19:54:56.292801  103685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 19:54:56.302054  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 19:54:56.312528  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 19:54:56.323188  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 19:54:56.333215  103685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 19:54:56.342029  103685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 19:54:56.351817  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:54:56.438998  103685 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 19:54:56.546269  103685 start.go:495] detecting cgroup driver to use...
	I1216 19:54:56.546316  103685 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 19:54:56.546367  103685 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 19:54:56.558872  103685 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1216 19:54:56.559013  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 19:54:56.575737  103685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 19:54:56.592717  103685 ssh_runner.go:195] Run: which cri-dockerd
	I1216 19:54:56.596523  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 19:54:56.605046  103685 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1216 19:54:56.626911  103685 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 19:54:56.735467  103685 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 19:54:56.828873  103685 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 19:54:56.828999  103685 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
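The 130-byte /etc/docker/daemon.json copied here is not dumped in the log. Pinning Docker's cgroup driver is conventionally done via the exec-opts key, so the sketch below shows an assumed (not verbatim) rendering of such a file; treat the exact content as an assumption.

// Sketch: emit a daemon.json that selects the cgroupfs cgroup driver.
// The real file written by minikube is not shown in this log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]interface{}{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out)) // candidate content for /etc/docker/daemon.json
}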
	I1216 19:54:56.847620  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:54:56.953368  103685 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 19:54:57.583024  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 19:54:57.594520  103685 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 19:54:57.607339  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 19:54:57.618944  103685 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 19:54:57.698960  103685 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 19:54:57.787136  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:54:57.873932  103685 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 19:54:57.887773  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 19:54:57.899001  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:54:57.987177  103685 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 19:54:58.078665  103685 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 19:54:58.078785  103685 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
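The "Will wait 60s for socket path" step amounts to polling until /var/run/cri-dockerd.sock exists (the log does it with stat over SSH). A small local sketch of that deadline-bounded poll, purely illustrative:

// Sketch: wait up to a timeout for a filesystem path (here the cri-dockerd
// socket) to appear, checking with os.Stat at a short interval.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}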
	I1216 19:54:58.083610  103685 start.go:563] Will wait 60s for crictl version
	I1216 19:54:58.083728  103685 ssh_runner.go:195] Run: which crictl
	I1216 19:54:58.087553  103685 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 19:54:58.143003  103685 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1216 19:54:58.143098  103685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 19:54:58.166668  103685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 19:54:58.192791  103685 out.go:235] * Preparing Kubernetes v1.32.0 on Docker 27.4.0 ...
	I1216 19:54:58.192924  103685 cli_runner.go:164] Run: docker network inspect ha-082404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 19:54:58.208935  103685 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 19:54:58.212582  103685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
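The bash one-liner above makes the host.minikube.internal entry idempotent: it strips any stale mapping, appends the current one, and copies the result back over /etc/hosts. A Go sketch of the same idea (illustrative only; the log does it with grep/echo/cp over SSH):

// Sketch: drop any previous host.minikube.internal line from /etc/hosts and
// re-append the current mapping.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.49.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // remove any previous mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Overwrite in place (like `cp`), which keeps the inode of the
	// container's bind-mounted /etc/hosts intact.
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}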
	I1216 19:54:58.223683  103685 kubeadm.go:883] updating cluster {Name:ha-082404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:ha-082404 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 19:54:58.223844  103685 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 19:54:58.223906  103685 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 19:54:58.243738  103685 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1216 19:54:58.243762  103685 docker.go:619] Images already preloaded, skipping extraction
	I1216 19:54:58.243828  103685 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 19:54:58.263266  103685 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	ghcr.io/kube-vip/kube-vip:v0.8.7
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1216 19:54:58.263292  103685 cache_images.go:84] Images are preloaded, skipping loading
	I1216 19:54:58.263303  103685 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.0 docker true true} ...
	I1216 19:54:58.263406  103685 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-082404 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:ha-082404 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
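The kubelet unit override printed above carries the per-node values (binary version, hostname-override, node-ip) while the shared settings live in the kubeadm/kubelet config. A sketch of rendering such a drop-in from node-specific values with text/template; this is an assumed illustration, not minikube's actual template:

// Sketch: render the kubelet systemd override for one node.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.32.0", "ha-082404", "192.168.49.2"})
}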
	I1216 19:54:58.263467  103685 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 19:54:58.318816  103685 cni.go:84] Creating CNI manager for ""
	I1216 19:54:58.318844  103685 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1216 19:54:58.318856  103685 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 19:54:58.318880  103685 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-082404 NodeName:ha-082404 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 19:54:58.319014  103685 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-082404"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
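The kubeadm config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A standard-library-only sketch that splits such a stream on "---" and lists each document's kind, purely as an illustration of the file's shape:

// Sketch: list the kinds in a multi-document kubeadm YAML file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Println(strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	// Expected for the file above: InitConfiguration, ClusterConfiguration,
	// KubeletConfiguration, KubeProxyConfiguration.
}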
	
	I1216 19:54:58.319030  103685 kube-vip.go:115] generating kube-vip config ...
	I1216 19:54:58.319079  103685 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 19:54:58.330966  103685 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
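The failed check above is just `lsmod | grep ip_vs` exiting non-zero because no ip_vs modules are loaded, so minikube gives up on IPVS-based control-plane load-balancing and the kube-vip config that follows keeps only the ARP-advertised VIP. lsmod reads /proc/modules, so the same check can be sketched directly (illustrative only):

// Sketch: report whether any ip_vs kernel modules are loaded.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	loaded := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			loaded = true
			fmt.Println(sc.Text())
		}
	}
	if !loaded {
		fmt.Println("no ip_vs modules loaded; skipping IPVS-based control-plane load-balancing")
	}
}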
	I1216 19:54:58.331112  103685 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
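The static pod manifest above advertises 192.168.49.254 (the `address` env var) on port 8443 as the HA control-plane VIP, with leader election between the control-plane nodes. A trivial reachability probe against those two values, as a hedged illustration rather than anything minikube runs:

// Sketch: check whether the kube-vip VIP answers on the API server port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP answering on 192.168.49.254:8443")
}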
	I1216 19:54:58.331171  103685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 19:54:58.340122  103685 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 19:54:58.340198  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1216 19:54:58.348925  103685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1216 19:54:58.367736  103685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 19:54:58.385764  103685 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1216 19:54:58.404015  103685 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 19:54:58.422355  103685 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 19:54:58.425640  103685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 19:54:58.436197  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:54:58.515115  103685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 19:54:58.529263  103685 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404 for IP: 192.168.49.2
	I1216 19:54:58.529288  103685 certs.go:194] generating shared ca certs ...
	I1216 19:54:58.529304  103685 certs.go:226] acquiring lock for ca certs: {Name:mk61ac4ce13eccd2c732f8ba869cb043f9f7a744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:54:58.529448  103685 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.key
	I1216 19:54:58.529492  103685 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.key
	I1216 19:54:58.529504  103685 certs.go:256] generating profile certs ...
	I1216 19:54:58.529580  103685 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/client.key
	I1216 19:54:58.529611  103685 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key.17c43a0e
	I1216 19:54:58.529635  103685 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.crt.17c43a0e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1216 19:54:59.106230  103685 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.crt.17c43a0e ...
	I1216 19:54:59.106312  103685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.crt.17c43a0e: {Name:mkb5077c176c74589f525fc61df79a62d49e81bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:54:59.106542  103685 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key.17c43a0e ...
	I1216 19:54:59.106585  103685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key.17c43a0e: {Name:mk80e1751f9aa9d1369db3bc6fa2413f9cb2e303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:54:59.106716  103685 certs.go:381] copying /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.crt.17c43a0e -> /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.crt
	I1216 19:54:59.106904  103685 certs.go:385] copying /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key.17c43a0e -> /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key
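The apiserver serving cert regenerated above carries IP SANs for the service IP, localhost, both control-plane node IPs, and the HA VIP (the list logged at 19:54:58.529635). A self-contained Go sketch of issuing a cert with that SAN list; it self-signs for brevity, whereas minikube signs with its minikubeCA, so treat it as illustrative only:

// Sketch: create a serving certificate whose IP SANs match the logged list.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs from the log: service IP, loopback, node IPs, HA VIP
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"), net.ParseIP("192.168.49.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}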
	I1216 19:54:59.107087  103685 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.key
	I1216 19:54:59.107122  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 19:54:59.107155  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 19:54:59.107197  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 19:54:59.107233  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 19:54:59.107263  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 19:54:59.107305  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 19:54:59.107342  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 19:54:59.107373  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 19:54:59.107459  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569.pem (1338 bytes)
	W1216 19:54:59.107517  103685 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569_empty.pem, impossibly tiny 0 bytes
	I1216 19:54:59.107543  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 19:54:59.107596  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem (1082 bytes)
	I1216 19:54:59.107646  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem (1123 bytes)
	I1216 19:54:59.107708  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem (1675 bytes)
	I1216 19:54:59.107813  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem (1708 bytes)
	I1216 19:54:59.107882  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem -> /usr/share/ca-certificates/75692.pem
	I1216 19:54:59.107925  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:54:59.107957  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569.pem -> /usr/share/ca-certificates/7569.pem
	I1216 19:54:59.108614  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 19:54:59.140083  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 19:54:59.172974  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 19:54:59.205651  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 19:54:59.237280  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 19:54:59.266686  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 19:54:59.291319  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 19:54:59.315312  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 19:54:59.338877  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem --> /usr/share/ca-certificates/75692.pem (1708 bytes)
	I1216 19:54:59.362955  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 19:54:59.387791  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569.pem --> /usr/share/ca-certificates/7569.pem (1338 bytes)
	I1216 19:54:59.412062  103685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 19:54:59.429383  103685 ssh_runner.go:195] Run: openssl version
	I1216 19:54:59.434899  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 19:54:59.444083  103685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:54:59.447496  103685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:54:59.447565  103685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:54:59.454415  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 19:54:59.463276  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7569.pem && ln -fs /usr/share/ca-certificates/7569.pem /etc/ssl/certs/7569.pem"
	I1216 19:54:59.472588  103685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7569.pem
	I1216 19:54:59.476401  103685 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/7569.pem
	I1216 19:54:59.476467  103685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7569.pem
	I1216 19:54:59.483706  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7569.pem /etc/ssl/certs/51391683.0"
	I1216 19:54:59.492978  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75692.pem && ln -fs /usr/share/ca-certificates/75692.pem /etc/ssl/certs/75692.pem"
	I1216 19:54:59.502425  103685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75692.pem
	I1216 19:54:59.505794  103685 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/75692.pem
	I1216 19:54:59.505929  103685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75692.pem
	I1216 19:54:59.513172  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75692.pem /etc/ssl/certs/3ec20f2e.0"
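The last few commands follow the OpenSSL trust-store convention: `openssl x509 -hash -noout` prints the subject hash (b5213941, 51391683, 3ec20f2e above), and /etc/ssl/certs/<hash>.0 is symlinked to the corresponding PEM so OpenSSL-based clients can locate it. A small sketch of that pattern, shelling out to openssl just as the log does (illustrative only):

// Sketch: compute a certificate's subject hash and create the
// /etc/ssl/certs/<hash>.0 symlink.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/75692.pem" // one of the certs from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. 3ec20f2e
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
	fmt.Println(link, "->", pemPath)
}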
	I1216 19:54:59.522065  103685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 19:54:59.525439  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 19:54:59.532467  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 19:54:59.540038  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 19:54:59.546757  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 19:54:59.553560  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 19:54:59.560388  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 19:54:59.567458  103685 kubeadm.go:392] StartCluster: {Name:ha-082404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:ha-082404 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false k
ubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:54:59.567640  103685 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 19:54:59.586026  103685 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 19:54:59.594797  103685 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 19:54:59.594817  103685 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 19:54:59.594887  103685 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 19:54:59.603257  103685 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 19:54:59.603684  103685 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-082404" does not appear in /home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:54:59.603795  103685 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-2258/kubeconfig needs updating (will repair): [kubeconfig missing "ha-082404" cluster setting kubeconfig missing "ha-082404" context setting]
	I1216 19:54:59.604067  103685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-2258/kubeconfig: {Name:mka70734b2114420160cdb9aedbb0d97125ea129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:54:59.604463  103685 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:54:59.604712  103685 kapi.go:59] client config for ha-082404: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/client.crt", KeyFile:"/home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/client.key", CAFile:"/home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1eafe20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 19:54:59.605356  103685 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 19:54:59.605430  103685 cert_rotation.go:140] Starting client certificate rotation controller
	I1216 19:54:59.614078  103685 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I1216 19:54:59.614104  103685 kubeadm.go:597] duration metric: took 19.280385ms to restartPrimaryControlPlane
	I1216 19:54:59.614131  103685 kubeadm.go:394] duration metric: took 46.681301ms to StartCluster
	I1216 19:54:59.614151  103685 settings.go:142] acquiring lock: {Name:mkf2c060c99b8151a60e25cdfc7df7912c0c88fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:54:59.614236  103685 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:54:59.614847  103685 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-2258/kubeconfig: {Name:mka70734b2114420160cdb9aedbb0d97125ea129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:54:59.615081  103685 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 19:54:59.615108  103685 start.go:241] waiting for startup goroutines ...
	I1216 19:54:59.615116  103685 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 19:54:59.615382  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:54:59.619643  103685 out.go:177] * Enabled addons: 
	I1216 19:54:59.622371  103685 addons.go:510] duration metric: took 7.248467ms for enable addons: enabled=[]
	I1216 19:54:59.622413  103685 start.go:246] waiting for cluster config update ...
	I1216 19:54:59.622423  103685 start.go:255] writing updated cluster config ...
	I1216 19:54:59.625414  103685 out.go:201] 
	I1216 19:54:59.628384  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:54:59.628496  103685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/config.json ...
	I1216 19:54:59.631440  103685 out.go:177] * Starting "ha-082404-m02" control-plane node in "ha-082404" cluster
	I1216 19:54:59.634101  103685 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 19:54:59.636722  103685 out.go:177] * Pulling base image v0.0.45-1734029593-20090 ...
	I1216 19:54:59.639322  103685 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 19:54:59.639344  103685 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1216 19:54:59.639347  103685 cache.go:56] Caching tarball of preloaded images
	I1216 19:54:59.639521  103685 preload.go:172] Found /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 19:54:59.639532  103685 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 19:54:59.639667  103685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/config.json ...
	I1216 19:54:59.666296  103685 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon, skipping pull
	I1216 19:54:59.666321  103685 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in daemon, skipping load
	I1216 19:54:59.666339  103685 cache.go:194] Successfully downloaded all kic artifacts
	I1216 19:54:59.666364  103685 start.go:360] acquireMachinesLock for ha-082404-m02: {Name:mk30a416a7c89b14eeb36c6dcc0c87eda00f817a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 19:54:59.666459  103685 start.go:364] duration metric: took 74.854µs to acquireMachinesLock for "ha-082404-m02"
	I1216 19:54:59.666488  103685 start.go:96] Skipping create...Using existing machine configuration
	I1216 19:54:59.666500  103685 fix.go:54] fixHost starting: m02
	I1216 19:54:59.666795  103685 cli_runner.go:164] Run: docker container inspect ha-082404-m02 --format={{.State.Status}}
	I1216 19:54:59.683448  103685 fix.go:112] recreateIfNeeded on ha-082404-m02: state=Stopped err=<nil>
	W1216 19:54:59.683475  103685 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 19:54:59.686569  103685 out.go:177] * Restarting existing docker container for "ha-082404-m02" ...
	I1216 19:54:59.689312  103685 cli_runner.go:164] Run: docker start ha-082404-m02
	I1216 19:54:59.992531  103685 cli_runner.go:164] Run: docker container inspect ha-082404-m02 --format={{.State.Status}}
	I1216 19:55:00.038531  103685 kic.go:430] container "ha-082404-m02" state is running.
	I1216 19:55:00.039194  103685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404-m02
	I1216 19:55:00.088761  103685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/config.json ...
	I1216 19:55:00.089057  103685 machine.go:93] provisionDockerMachine start ...
	I1216 19:55:00.089127  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:00.136382  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:55:00.137258  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1216 19:55:00.137292  103685 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 19:55:00.139151  103685 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40988->127.0.0.1:32833: read: connection reset by peer
	I1216 19:55:03.421424  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-082404-m02
	
	I1216 19:55:03.421452  103685 ubuntu.go:169] provisioning hostname "ha-082404-m02"
	I1216 19:55:03.421523  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:03.458046  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:55:03.458300  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1216 19:55:03.458319  103685 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-082404-m02 && echo "ha-082404-m02" | sudo tee /etc/hostname
	I1216 19:55:03.670477  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-082404-m02
	
	I1216 19:55:03.670576  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:03.703537  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:55:03.703797  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1216 19:55:03.703820  103685 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-082404-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-082404-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-082404-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 19:55:03.910025  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 19:55:03.910097  103685 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20091-2258/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-2258/.minikube}
	I1216 19:55:03.910150  103685 ubuntu.go:177] setting up certificates
	I1216 19:55:03.910174  103685 provision.go:84] configureAuth start
	I1216 19:55:03.910268  103685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404-m02
	I1216 19:55:03.940499  103685 provision.go:143] copyHostCerts
	I1216 19:55:03.940541  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem
	I1216 19:55:03.940575  103685 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem, removing ...
	I1216 19:55:03.940582  103685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem
	I1216 19:55:03.940661  103685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem (1082 bytes)
	I1216 19:55:03.940740  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem
	I1216 19:55:03.940758  103685 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem, removing ...
	I1216 19:55:03.940762  103685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem
	I1216 19:55:03.940789  103685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem (1123 bytes)
	I1216 19:55:03.940831  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem
	I1216 19:55:03.940847  103685 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem, removing ...
	I1216 19:55:03.940851  103685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem
	I1216 19:55:03.940874  103685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem (1675 bytes)
	I1216 19:55:03.940916  103685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca-key.pem org=jenkins.ha-082404-m02 san=[127.0.0.1 192.168.49.3 ha-082404-m02 localhost minikube]
	I1216 19:55:04.776804  103685 provision.go:177] copyRemoteCerts
	I1216 19:55:04.776962  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 19:55:04.777031  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:04.814135  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m02/id_rsa Username:docker}
	I1216 19:55:04.953594  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 19:55:04.953659  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 19:55:05.047654  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 19:55:05.047724  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 19:55:05.145481  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 19:55:05.145553  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 19:55:05.240923  103685 provision.go:87] duration metric: took 1.330722824s to configureAuth
	I1216 19:55:05.240989  103685 ubuntu.go:193] setting minikube options for container-runtime
	I1216 19:55:05.241247  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:55:05.241328  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:05.277391  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:55:05.277630  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1216 19:55:05.277648  103685 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 19:55:05.529057  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 19:55:05.529093  103685 ubuntu.go:71] root file system type: overlay
	I1216 19:55:05.529256  103685 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 19:55:05.529336  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:05.557743  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:55:05.558030  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1216 19:55:05.558124  103685 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 19:55:05.918617  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 19:55:05.918721  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:05.955939  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:55:05.956182  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1216 19:55:05.956206  103685 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 19:55:06.272638  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 19:55:06.272666  103685 machine.go:96] duration metric: took 6.183599134s to provisionDockerMachine
	I1216 19:55:06.272688  103685 start.go:293] postStartSetup for "ha-082404-m02" (driver="docker")
	I1216 19:55:06.272702  103685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 19:55:06.272812  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 19:55:06.272870  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:06.302083  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m02/id_rsa Username:docker}
	I1216 19:55:06.455395  103685 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 19:55:06.468222  103685 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 19:55:06.468258  103685 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 19:55:06.468269  103685 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 19:55:06.468276  103685 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1216 19:55:06.468286  103685 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-2258/.minikube/addons for local assets ...
	I1216 19:55:06.468348  103685 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-2258/.minikube/files for local assets ...
	I1216 19:55:06.468419  103685 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem -> 75692.pem in /etc/ssl/certs
	I1216 19:55:06.468426  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem -> /etc/ssl/certs/75692.pem
	I1216 19:55:06.468566  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 19:55:06.515880  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem --> /etc/ssl/certs/75692.pem (1708 bytes)
	I1216 19:55:06.621375  103685 start.go:296] duration metric: took 348.670218ms for postStartSetup
	I1216 19:55:06.621521  103685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 19:55:06.621585  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:06.651491  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m02/id_rsa Username:docker}
	I1216 19:55:06.870190  103685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 19:55:06.882948  103685 fix.go:56] duration metric: took 7.216431829s for fixHost
	I1216 19:55:06.882979  103685 start.go:83] releasing machines lock for "ha-082404-m02", held for 7.216502423s
	I1216 19:55:06.883051  103685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404-m02
	I1216 19:55:06.910860  103685 out.go:177] * Found network options:
	I1216 19:55:06.913461  103685 out.go:177]   - NO_PROXY=192.168.49.2
	W1216 19:55:06.916056  103685 proxy.go:119] fail to check proxy env: Error ip not in block
	W1216 19:55:06.916096  103685 proxy.go:119] fail to check proxy env: Error ip not in block
	I1216 19:55:06.916173  103685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 19:55:06.916226  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:06.916474  103685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 19:55:06.916529  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m02
	I1216 19:55:06.944136  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m02/id_rsa Username:docker}
	I1216 19:55:06.962005  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m02/id_rsa Username:docker}
	I1216 19:55:07.107812  103685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1216 19:55:07.419056  103685 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1216 19:55:07.419148  103685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 19:55:07.462281  103685 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 19:55:07.462313  103685 start.go:495] detecting cgroup driver to use...
	I1216 19:55:07.462349  103685 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 19:55:07.462449  103685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 19:55:07.508709  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1216 19:55:07.554955  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 19:55:07.586888  103685 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 19:55:07.586998  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 19:55:07.624519  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 19:55:07.663465  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 19:55:07.702298  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 19:55:07.798132  103685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 19:55:07.864468  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 19:55:07.884618  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 19:55:07.897281  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 19:55:07.912463  103685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 19:55:07.922985  103685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 19:55:07.932600  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:55:08.273297  103685 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 19:55:18.949564  103685 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.676216235s)
	I1216 19:55:18.949594  103685 start.go:495] detecting cgroup driver to use...
	I1216 19:55:18.949631  103685 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 19:55:18.949684  103685 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 19:55:18.986837  103685 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1216 19:55:18.986923  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 19:55:19.029599  103685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 19:55:19.071750  103685 ssh_runner.go:195] Run: which cri-dockerd
	I1216 19:55:19.087660  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 19:55:19.114528  103685 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1216 19:55:19.162019  103685 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 19:55:19.331558  103685 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 19:55:19.483311  103685 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 19:55:19.483351  103685 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 19:55:19.536577  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:55:19.726366  103685 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 19:55:20.529557  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 19:55:20.542543  103685 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 19:55:20.565866  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 19:55:20.580744  103685 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 19:55:20.728570  103685 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 19:55:20.878448  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:55:21.028622  103685 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 19:55:21.053976  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 19:55:21.068190  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:55:21.226576  103685 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 19:55:21.367914  103685 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 19:55:21.367992  103685 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 19:55:21.373459  103685 start.go:563] Will wait 60s for crictl version
	I1216 19:55:21.373602  103685 ssh_runner.go:195] Run: which crictl
	I1216 19:55:21.386387  103685 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 19:55:21.460177  103685 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1216 19:55:21.460289  103685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 19:55:21.516482  103685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 19:55:21.573328  103685 out.go:235] * Preparing Kubernetes v1.32.0 on Docker 27.4.0 ...
	I1216 19:55:21.576172  103685 out.go:177]   - env NO_PROXY=192.168.49.2
	I1216 19:55:21.579121  103685 cli_runner.go:164] Run: docker network inspect ha-082404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 19:55:21.597795  103685 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 19:55:21.607190  103685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 19:55:21.626026  103685 mustload.go:65] Loading cluster: ha-082404
	I1216 19:55:21.626269  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:55:21.626552  103685 cli_runner.go:164] Run: docker container inspect ha-082404 --format={{.State.Status}}
	I1216 19:55:21.650189  103685 host.go:66] Checking if "ha-082404" exists ...
	I1216 19:55:21.650468  103685 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404 for IP: 192.168.49.3
	I1216 19:55:21.650476  103685 certs.go:194] generating shared ca certs ...
	I1216 19:55:21.650489  103685 certs.go:226] acquiring lock for ca certs: {Name:mk61ac4ce13eccd2c732f8ba869cb043f9f7a744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:55:21.650597  103685 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.key
	I1216 19:55:21.650636  103685 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.key
	I1216 19:55:21.650644  103685 certs.go:256] generating profile certs ...
	I1216 19:55:21.650716  103685 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/client.key
	I1216 19:55:21.650776  103685 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key.a8075758
	I1216 19:55:21.650813  103685 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.key
	I1216 19:55:21.650820  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 19:55:21.650832  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 19:55:21.650842  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 19:55:21.650854  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 19:55:21.650876  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 19:55:21.650890  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 19:55:21.650911  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 19:55:21.650922  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 19:55:21.650988  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569.pem (1338 bytes)
	W1216 19:55:21.651021  103685 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569_empty.pem, impossibly tiny 0 bytes
	I1216 19:55:21.651028  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 19:55:21.651051  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem (1082 bytes)
	I1216 19:55:21.651083  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem (1123 bytes)
	I1216 19:55:21.651103  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem (1675 bytes)
	I1216 19:55:21.651146  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem (1708 bytes)
	I1216 19:55:21.651174  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569.pem -> /usr/share/ca-certificates/7569.pem
	I1216 19:55:21.651188  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem -> /usr/share/ca-certificates/75692.pem
	I1216 19:55:21.651198  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:55:21.651255  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:55:21.679379  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404/id_rsa Username:docker}
	I1216 19:55:21.778150  103685 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1216 19:55:21.782417  103685 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1216 19:55:21.800985  103685 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1216 19:55:21.805624  103685 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1216 19:55:21.821748  103685 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1216 19:55:21.826269  103685 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1216 19:55:21.844554  103685 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1216 19:55:21.849139  103685 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1216 19:55:21.866105  103685 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1216 19:55:21.870094  103685 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1216 19:55:21.883896  103685 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1216 19:55:21.887737  103685 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1216 19:55:21.902032  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 19:55:21.937291  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 19:55:22.015271  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 19:55:22.109960  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 19:55:22.207751  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 19:55:22.316817  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 19:55:22.419036  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 19:55:22.626344  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 19:55:22.749534  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569.pem --> /usr/share/ca-certificates/7569.pem (1338 bytes)
	I1216 19:55:22.940495  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem --> /usr/share/ca-certificates/75692.pem (1708 bytes)
	I1216 19:55:23.045839  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 19:55:23.093922  103685 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1216 19:55:23.181797  103685 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1216 19:55:23.206553  103685 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1216 19:55:23.259794  103685 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1216 19:55:23.481206  103685 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1216 19:55:23.569983  103685 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1216 19:55:23.631141  103685 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1216 19:55:23.654029  103685 ssh_runner.go:195] Run: openssl version
	I1216 19:55:23.668189  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7569.pem && ln -fs /usr/share/ca-certificates/7569.pem /etc/ssl/certs/7569.pem"
	I1216 19:55:23.690238  103685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7569.pem
	I1216 19:55:23.694563  103685 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/7569.pem
	I1216 19:55:23.694678  103685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7569.pem
	I1216 19:55:23.706744  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7569.pem /etc/ssl/certs/51391683.0"
	I1216 19:55:23.723826  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75692.pem && ln -fs /usr/share/ca-certificates/75692.pem /etc/ssl/certs/75692.pem"
	I1216 19:55:23.750473  103685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75692.pem
	I1216 19:55:23.763431  103685 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/75692.pem
	I1216 19:55:23.763552  103685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75692.pem
	I1216 19:55:23.781626  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75692.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 19:55:23.804241  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 19:55:23.834870  103685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:55:23.839678  103685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:55:23.839783  103685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:55:23.847924  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 19:55:23.858111  103685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 19:55:23.886190  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 19:55:23.915107  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 19:55:23.944387  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 19:55:23.967631  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 19:55:23.987856  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 19:55:24.006193  103685 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 19:55:24.029296  103685 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.32.0 docker true true} ...
	I1216 19:55:24.029467  103685 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-082404-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:ha-082404 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 19:55:24.029501  103685 kube-vip.go:115] generating kube-vip config ...
	I1216 19:55:24.029565  103685 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1216 19:55:24.067949  103685 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1216 19:55:24.068035  103685 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1216 19:55:24.068113  103685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 19:55:24.100521  103685 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 19:55:24.100620  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1216 19:55:24.134223  103685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1216 19:55:24.270591  103685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 19:55:24.400174  103685 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1216 19:55:24.495127  103685 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 19:55:24.516790  103685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 19:55:24.588634  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:55:24.836722  103685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 19:55:24.851927  103685 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 19:55:24.852248  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:55:24.856762  103685 out.go:177] * Verifying Kubernetes components...
	I1216 19:55:24.859293  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:55:25.070341  103685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 19:55:25.105049  103685 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:55:25.105396  103685 kapi.go:59] client config for ha-082404: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/client.crt", KeyFile:"/home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/client.key", CAFile:"/home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1eafe20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 19:55:25.105477  103685 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 19:55:25.105762  103685 node_ready.go:35] waiting up to 6m0s for node "ha-082404-m02" to be "Ready" ...
	I1216 19:55:25.105879  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:25.105894  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.105904  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.105911  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.128275  103685 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1216 19:55:25.131445  103685 node_ready.go:49] node "ha-082404-m02" has status "Ready":"True"
	I1216 19:55:25.131474  103685 node_ready.go:38] duration metric: took 25.691092ms for node "ha-082404-m02" to be "Ready" ...
	I1216 19:55:25.131485  103685 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 19:55:25.131533  103685 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 19:55:25.131551  103685 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 19:55:25.131611  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1216 19:55:25.131622  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.131631  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.131635  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.144276  103685 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1216 19:55:25.160272  103685 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9th4p" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.160379  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-9th4p
	I1216 19:55:25.160392  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.160402  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.160413  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.165794  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:55:25.166949  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:25.166972  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.166983  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.166987  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.171902  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:55:25.172862  103685 pod_ready.go:93] pod "coredns-668d6bf9bc-9th4p" in "kube-system" namespace has status "Ready":"True"
	I1216 19:55:25.172888  103685 pod_ready.go:82] duration metric: took 12.575089ms for pod "coredns-668d6bf9bc-9th4p" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.172900  103685 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mwl2r" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.172989  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-mwl2r
	I1216 19:55:25.172999  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.173009  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.173012  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.178282  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:55:25.179299  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:25.179322  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.179333  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.179338  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.184016  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:55:25.184807  103685 pod_ready.go:93] pod "coredns-668d6bf9bc-mwl2r" in "kube-system" namespace has status "Ready":"True"
	I1216 19:55:25.184831  103685 pod_ready.go:82] duration metric: took 11.915736ms for pod "coredns-668d6bf9bc-mwl2r" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.184844  103685 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.184921  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-082404
	I1216 19:55:25.184931  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.184940  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.184945  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.189559  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:55:25.190760  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:25.190781  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.190790  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.190795  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.193210  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:25.194052  103685 pod_ready.go:93] pod "etcd-ha-082404" in "kube-system" namespace has status "Ready":"True"
	I1216 19:55:25.194075  103685 pod_ready.go:82] duration metric: took 9.217454ms for pod "etcd-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.194088  103685 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.194166  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-082404-m02
	I1216 19:55:25.194176  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.194184  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.194192  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.198494  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:55:25.199544  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:25.199564  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.199574  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.199578  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.202223  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:25.203109  103685 pod_ready.go:93] pod "etcd-ha-082404-m02" in "kube-system" namespace has status "Ready":"True"
	I1216 19:55:25.203129  103685 pod_ready.go:82] duration metric: took 9.029644ms for pod "etcd-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.203141  103685 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-082404-m03" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.306745  103685 request.go:632] Waited for 103.527586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-082404-m03
	I1216 19:55:25.306807  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-082404-m03
	I1216 19:55:25.306816  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.306827  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.306834  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.310211  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:25.506572  103685 request.go:632] Waited for 195.24444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m03
	I1216 19:55:25.506639  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m03
	I1216 19:55:25.506647  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.506656  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.506667  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.509682  103685 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1216 19:55:25.510157  103685 pod_ready.go:98] node "ha-082404-m03" hosting pod "etcd-ha-082404-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:55:25.510177  103685 pod_ready.go:82] duration metric: took 307.021033ms for pod "etcd-ha-082404-m03" in "kube-system" namespace to be "Ready" ...
	E1216 19:55:25.510192  103685 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-082404-m03" hosting pod "etcd-ha-082404-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:55:25.510221  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.706522  103685 request.go:632] Waited for 196.219352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404
	I1216 19:55:25.706580  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404
	I1216 19:55:25.706590  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.706599  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.706603  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.709499  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:25.906544  103685 request.go:632] Waited for 196.20958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:25.906670  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:25.906716  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:25.906749  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:25.906772  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:25.911126  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:55:25.912532  103685 pod_ready.go:93] pod "kube-apiserver-ha-082404" in "kube-system" namespace has status "Ready":"True"
	I1216 19:55:25.912604  103685 pod_ready.go:82] duration metric: took 402.367448ms for pod "kube-apiserver-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:25.912675  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:26.106813  103685 request.go:632] Waited for 194.003183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404-m02
	I1216 19:55:26.106930  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404-m02
	I1216 19:55:26.106944  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:26.106953  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:26.106958  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:26.110226  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:26.306552  103685 request.go:632] Waited for 195.345332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:26.306658  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:26.306672  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:26.306682  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:26.306693  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:26.309443  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:26.310047  103685 pod_ready.go:93] pod "kube-apiserver-ha-082404-m02" in "kube-system" namespace has status "Ready":"True"
	I1216 19:55:26.310071  103685 pod_ready.go:82] duration metric: took 397.372938ms for pod "kube-apiserver-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:26.310084  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-082404-m03" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:26.505949  103685 request.go:632] Waited for 195.780248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404-m03
	I1216 19:55:26.506037  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404-m03
	I1216 19:55:26.506049  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:26.506058  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:26.506062  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:26.509007  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:26.705970  103685 request.go:632] Waited for 196.229087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m03
	I1216 19:55:26.706034  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m03
	I1216 19:55:26.706041  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:26.706050  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:26.706056  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:26.711100  103685 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I1216 19:55:26.711922  103685 pod_ready.go:98] node "ha-082404-m03" hosting pod "kube-apiserver-ha-082404-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:55:26.711948  103685 pod_ready.go:82] duration metric: took 401.856071ms for pod "kube-apiserver-ha-082404-m03" in "kube-system" namespace to be "Ready" ...
	E1216 19:55:26.711960  103685 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-082404-m03" hosting pod "kube-apiserver-ha-082404-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:55:26.711969  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:26.906226  103685 request.go:632] Waited for 194.136589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:26.906338  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:26.906376  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:26.906404  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:26.906422  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:26.909406  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:27.106393  103685 request.go:632] Waited for 195.318856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:27.106478  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:27.106492  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:27.106542  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:27.106546  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:27.109508  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:27.306320  103685 request.go:632] Waited for 93.141374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:27.306391  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:27.306404  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:27.306414  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:27.306419  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:27.309527  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:27.506675  103685 request.go:632] Waited for 196.268082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:27.506751  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:27.506763  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:27.506773  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:27.506791  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:27.509888  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:27.712525  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:27.712551  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:27.712562  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:27.712566  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:27.716140  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:27.906379  103685 request.go:632] Waited for 189.198247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:27.906452  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:27.906464  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:27.906472  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:27.906479  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:27.909221  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:28.212227  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:28.212252  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:28.212261  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:28.212264  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:28.215409  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:28.306156  103685 request.go:632] Waited for 89.700659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:28.306218  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:28.306227  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:28.306236  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:28.306241  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:28.308923  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:28.713168  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:28.713193  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:28.713203  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:28.713207  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:28.716013  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:28.716909  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:28.716929  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:28.716939  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:28.716945  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:28.719477  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:28.720001  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:29.212793  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:29.212821  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:29.212836  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:29.212843  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:29.218265  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:55:29.219106  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:29.219146  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:29.219185  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:29.219211  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:29.225492  103685 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1216 19:55:29.712225  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:29.712249  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:29.712259  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:29.712264  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:29.715290  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:29.716443  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:29.716502  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:29.716528  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:29.716545  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:29.719469  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:30.212209  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:30.212263  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:30.212280  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:30.212290  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:30.217367  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:55:30.219240  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:30.219271  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:30.219281  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:30.219303  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:30.223429  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:55:30.712944  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:30.712968  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:30.712992  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:30.712997  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:30.716634  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:30.717934  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:30.717954  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:30.717964  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:30.717983  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:30.720779  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:30.721709  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:31.212907  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:31.212942  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:31.212952  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:31.212956  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:31.216672  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:31.217842  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:31.217864  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:31.217874  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:31.217879  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:31.221159  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:31.712877  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:31.712905  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:31.712915  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:31.712921  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:31.716250  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:31.717123  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:31.717144  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:31.717153  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:31.717158  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:31.720058  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:32.212506  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:32.212587  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:32.212641  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:32.212673  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:32.216330  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:32.217428  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:32.217487  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:32.217510  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:32.217529  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:32.220714  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:32.712252  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:32.712338  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:32.712362  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:32.712381  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:32.716163  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:32.717562  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:32.717622  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:32.717644  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:32.717664  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:32.721327  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:32.722472  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:33.212850  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:33.212883  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:33.212893  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:33.212897  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:33.215746  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:33.216939  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:33.216954  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:33.216963  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:33.216968  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:33.238659  103685 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1216 19:55:33.712187  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:33.712207  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:33.712216  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:33.712221  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:33.717777  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:55:33.718485  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:33.718497  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:33.718505  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:33.718510  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:33.722346  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:34.213167  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:34.213192  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:34.213202  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:34.213208  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:34.216390  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:34.217092  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:34.217114  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:34.217123  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:34.217129  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:34.219881  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:34.712704  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:34.712730  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:34.712740  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:34.712745  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:34.715715  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:34.716955  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:34.716971  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:34.716981  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:34.716987  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:34.719685  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:35.212232  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:35.212255  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:35.212265  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:35.212269  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:35.215656  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:35.216580  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:35.216601  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:35.216612  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:35.216616  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:35.219428  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:35.220026  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:35.712991  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:35.713017  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:35.713028  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:35.713033  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:35.716060  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:35.717056  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:35.717079  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:35.717089  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:35.717094  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:35.719764  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:36.213055  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:36.213083  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:36.213093  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:36.213097  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:36.216327  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:36.217607  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:36.217695  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:36.217709  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:36.217715  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:36.220812  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:36.712218  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:36.712243  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:36.712253  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:36.712258  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:36.715134  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:36.715877  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:36.715893  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:36.715902  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:36.715907  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:36.718328  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:37.212544  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:37.212568  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:37.212578  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:37.212581  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:37.215728  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:37.216590  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:37.216609  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:37.216619  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:37.216625  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:37.219296  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:37.712211  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:37.712238  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:37.712249  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:37.712253  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:37.715236  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:37.715907  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:37.715936  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:37.715945  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:37.715948  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:37.718574  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:37.719157  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:38.212230  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:38.212263  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:38.212282  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:38.212286  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:38.215354  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:38.216122  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:38.216158  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:38.216169  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:38.216187  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:38.219086  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:38.712255  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:38.712280  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:38.712289  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:38.712294  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:38.716944  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:55:38.717993  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:38.718013  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:38.718024  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:38.718028  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:38.720817  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:39.213204  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:39.213230  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:39.213240  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:39.213246  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:39.216241  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:39.217366  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:39.217392  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:39.217402  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:39.217406  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:39.220158  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:39.712409  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:39.712437  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:39.712447  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:39.712453  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:39.715348  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:39.716426  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:39.716443  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:39.716452  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:39.716457  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:39.719405  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:39.719914  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:40.212702  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:40.212721  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:40.212731  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:40.212734  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:40.216536  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:40.217297  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:40.217323  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:40.217332  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:40.217337  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:40.221184  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:40.712187  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:40.712214  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:40.712224  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:40.712230  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:40.715074  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:40.716186  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:40.716246  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:40.716269  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:40.716289  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:40.718966  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:41.212113  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:41.212133  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:41.212148  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:41.212152  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:41.221923  103685 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1216 19:55:41.223077  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:41.223094  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:41.223104  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:41.223110  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:41.234284  103685 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1216 19:55:41.713013  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:41.713031  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:41.713040  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:41.713044  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:41.727694  103685 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1216 19:55:41.728884  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:41.728901  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:41.728910  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:41.728915  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:41.758360  103685 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I1216 19:55:41.759387  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:42.213086  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:42.213110  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:42.213120  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:42.213124  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:42.229786  103685 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1216 19:55:42.230667  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:42.230684  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:42.230692  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:42.230696  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:42.237667  103685 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1216 19:55:42.712949  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:42.712967  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:42.712976  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:42.712981  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:42.746824  103685 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I1216 19:55:42.747567  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:42.747583  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:42.747592  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:42.747595  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:42.762485  103685 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1216 19:55:43.212259  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:43.212281  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:43.212291  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:43.212297  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:43.225759  103685 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1216 19:55:43.226968  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:43.226986  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:43.226996  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:43.227004  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:43.239485  103685 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1216 19:55:43.713028  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:43.713046  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:43.713056  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:43.713060  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:43.724510  103685 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1216 19:55:43.725695  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:43.725712  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:43.725721  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:43.725727  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:43.740178  103685 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1216 19:55:44.212177  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:44.212196  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:44.212205  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:44.212208  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:44.225183  103685 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1216 19:55:44.226335  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:44.226351  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:44.226360  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:44.226364  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:44.230702  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:55:44.231588  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:44.712206  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:44.712238  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:44.712249  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:44.712256  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:44.715275  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:44.716614  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:44.716631  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:44.716639  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:44.716643  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:44.719876  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:45.212796  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:45.212822  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:45.212833  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:45.212838  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:45.225107  103685 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1216 19:55:45.226663  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:45.226685  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:45.226695  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:45.226700  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:45.257691  103685 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1216 19:55:45.712174  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:45.712198  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:45.712208  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:45.712212  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:45.715138  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:45.715833  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:45.715847  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:45.715857  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:45.715862  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:45.718262  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:46.212311  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:46.212336  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:46.212347  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:46.212351  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:46.215514  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:46.217439  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:46.217459  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:46.217468  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:46.217473  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:46.222554  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:55:46.712997  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:46.713021  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:46.713030  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:46.713036  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:46.716121  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:46.716949  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:46.716970  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:46.716980  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:46.716983  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:46.719456  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:46.720093  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:47.212205  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:47.212228  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:47.212237  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:47.212241  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:47.215216  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:47.216032  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:47.216092  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:47.216103  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:47.216107  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:47.224911  103685 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1216 19:55:47.713073  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:47.713097  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:47.713113  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:47.713117  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:47.716414  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:47.717113  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:47.717124  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:47.717133  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:47.717139  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:47.719985  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:48.212914  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:48.212940  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:48.212949  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:48.212955  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:48.215914  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:48.217172  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:48.217201  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:48.217211  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:48.217215  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:48.219965  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:48.712147  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:48.712173  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:48.712184  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:48.712189  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:48.715144  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:48.715948  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:48.715966  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:48.715975  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:48.715982  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:48.718654  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:49.212732  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:49.212756  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:49.212766  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:49.212770  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:49.215778  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:49.216545  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:49.216562  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:49.216572  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:49.216576  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:49.223372  103685 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1216 19:55:49.223887  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:49.712430  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:49.712455  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:49.712465  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:49.712469  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:49.715408  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:49.716312  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:49.716335  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:49.716356  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:49.716361  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:49.719130  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:50.212583  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:50.212607  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:50.212616  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:50.212621  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:50.215582  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:50.216536  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:50.216556  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:50.216565  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:50.216571  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:50.219210  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:50.712785  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:50.712809  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:50.712818  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:50.712824  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:50.716075  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:50.716935  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:50.716953  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:50.716963  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:50.716969  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:50.719722  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:51.212916  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:51.212941  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:51.212951  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:51.212957  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:51.215989  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:51.216804  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:51.216846  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:51.216890  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:51.216915  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:51.226587  103685 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1216 19:55:51.227391  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:51.713057  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:51.713083  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:51.713094  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:51.713101  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:51.716214  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:51.716990  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:51.717009  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:51.717019  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:51.717026  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:51.719693  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:52.212797  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:52.212823  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:52.212833  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:52.212837  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:52.215770  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:52.216750  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:52.216768  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:52.216777  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:52.216781  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:52.224569  103685 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1216 19:55:52.712182  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:52.712204  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:52.712213  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:52.712219  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:52.718161  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:55:52.719039  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:52.719091  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:52.719113  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:52.719131  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:52.725547  103685 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1216 19:55:53.212388  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:53.212413  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:53.212424  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:53.212430  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:53.215492  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:53.216772  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:53.216799  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:53.216809  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:53.216813  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:53.219421  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:53.712969  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:53.712994  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:53.713010  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:53.713034  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:53.716221  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:53.716917  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:53.716933  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:53.716942  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:53.716948  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:53.719682  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:53.720240  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:54.212578  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:54.212603  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:54.212614  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:54.212618  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:54.216205  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:54.217135  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:54.217158  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:54.217168  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:54.217172  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:54.220408  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:54.712193  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:54.712220  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:54.712229  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:54.712233  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:54.715658  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:54.716647  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:54.716666  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:54.716676  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:54.716680  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:54.719360  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:55.212681  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:55.212706  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:55.212716  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:55.212720  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:55.215879  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:55.216581  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:55.216591  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:55.216599  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:55.216603  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:55.219690  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:55.713088  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:55.713113  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:55.713123  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:55.713127  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:55.716258  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:55.716906  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:55.716917  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:55.716925  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:55.716929  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:55.719373  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:56.212538  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:55:56.212563  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:56.212572  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:56.212577  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:56.215363  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:56.216109  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:55:56.216129  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:56.216139  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:56.216145  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:56.223684  103685 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1216 19:55:56.224157  103685 pod_ready.go:93] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"True"
	I1216 19:55:56.224178  103685 pod_ready.go:82] duration metric: took 29.512196236s for pod "kube-controller-manager-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:56.224197  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:55:56.224266  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:55:56.224277  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:56.224285  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:56.224290  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:56.226876  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:56.227523  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:56.227544  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:56.227553  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:56.227564  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:56.230165  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:56.725211  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:55:56.725243  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:56.725253  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:56.725257  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:56.728559  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:56.729538  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:56.729557  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:56.729569  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:56.729580  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:56.738833  103685 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1216 19:55:57.224837  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:55:57.224856  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:57.224865  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:57.224870  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:57.235546  103685 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1216 19:55:57.236355  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:57.236369  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:57.236379  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:57.236383  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:57.241555  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:55:57.724421  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:55:57.724447  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:57.724457  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:57.724461  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:57.727386  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:57.728265  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:57.728311  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:57.728335  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:57.728355  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:57.731607  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:58.225234  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:55:58.225259  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:58.225268  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:58.225272  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:58.228727  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:55:58.229973  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:58.229996  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:58.230006  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:58.230010  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:58.232747  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:58.233354  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:55:58.724722  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:55:58.724745  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:58.724756  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:58.724761  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:58.727738  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:58.728569  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:58.728592  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:58.728602  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:58.728609  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:58.732661  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:55:59.225336  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:55:59.225360  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:59.225369  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:59.225374  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:59.228191  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:59.229224  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:59.229243  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:59.229253  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:59.229257  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:59.231870  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:59.725225  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:55:59.725251  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:59.725261  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:59.725269  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:59.728259  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:55:59.729049  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:55:59.729097  103685 round_trippers.go:469] Request Headers:
	I1216 19:55:59.729125  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:55:59.729149  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:55:59.732447  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:00.225106  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:00.225142  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:00.225166  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:00.225174  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:00.230377  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:56:00.239810  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:00.239833  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:00.239841  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:00.239845  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:00.270052  103685 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1216 19:56:00.270596  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:56:00.725022  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:00.725048  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:00.725057  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:00.725061  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:00.727994  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:00.728914  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:00.728935  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:00.728945  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:00.728949  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:00.736767  103685 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1216 19:56:01.225339  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:01.225373  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:01.225385  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:01.225391  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:01.229490  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:56:01.230405  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:01.230430  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:01.230441  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:01.230446  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:01.233958  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:01.724544  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:01.724614  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:01.724666  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:01.724691  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:01.728975  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:56:01.730492  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:01.730513  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:01.730522  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:01.730526  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:01.733196  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:02.225187  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:02.225211  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:02.225223  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:02.225230  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:02.228173  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:02.229193  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:02.229210  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:02.229219  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:02.229222  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:02.231917  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:02.724994  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:02.725021  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:02.725031  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:02.725042  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:02.727987  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:02.728780  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:02.728798  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:02.728807  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:02.728812  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:02.738964  103685 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1216 19:56:02.739636  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:56:03.224520  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:03.224542  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:03.224552  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:03.224558  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:03.227493  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:03.228182  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:03.228192  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:03.228201  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:03.228205  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:03.230879  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:03.725092  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:03.725116  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:03.725126  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:03.725130  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:03.728149  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:03.728948  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:03.728964  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:03.728974  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:03.728979  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:03.732812  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:04.224978  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:04.224998  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:04.225007  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:04.225013  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:04.227993  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:04.228904  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:04.228923  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:04.228932  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:04.228936  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:04.231471  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:04.724532  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:04.724556  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:04.724565  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:04.724571  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:04.727627  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:04.728403  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:04.728462  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:04.728478  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:04.728484  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:04.738590  103685 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1216 19:56:05.224466  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:05.224486  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:05.224496  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:05.224503  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:05.227450  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:05.228229  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:05.228239  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:05.228248  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:05.228252  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:05.230766  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:05.231317  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:56:05.725240  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:05.725266  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:05.725281  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:05.725287  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:05.728403  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:05.729196  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:05.729214  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:05.729224  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:05.729228  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:05.739886  103685 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1216 19:56:06.225116  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:06.225146  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:06.225157  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:06.225161  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:06.228388  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:06.229443  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:06.229462  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:06.229471  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:06.229477  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:06.232251  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:06.724486  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:06.724508  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:06.724522  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:06.724526  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:06.727728  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:06.728583  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:06.728601  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:06.728610  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:06.728615  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:06.732980  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:56:07.225410  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:07.225431  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:07.225442  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:07.225447  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:07.228505  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:07.229334  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:07.229355  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:07.229364  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:07.229370  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:07.232170  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:07.232728  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:56:07.725021  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:07.725045  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:07.725054  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:07.725059  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:07.732272  103685 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1216 19:56:07.733238  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:07.733260  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:07.733270  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:07.733277  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:07.736568  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:08.224912  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:08.224936  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:08.224946  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:08.224951  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:08.228358  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:08.229669  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:08.229690  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:08.229700  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:08.229705  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:08.232601  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:08.724430  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:08.724456  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:08.724467  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:08.724476  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:08.727791  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:08.729191  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:08.729215  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:08.729225  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:08.729230  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:08.739485  103685 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1216 19:56:09.225361  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:09.225401  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:09.225415  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:09.225421  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:09.230915  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:56:09.240924  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:09.240950  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:09.240960  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:09.240971  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:09.250310  103685 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1216 19:56:09.250990  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:56:09.725171  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:09.725239  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:09.725264  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:09.725284  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:09.728894  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:09.730230  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:09.730299  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:09.730323  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:09.730342  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:09.740832  103685 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1216 19:56:10.225245  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:10.225267  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:10.225277  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:10.225281  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:10.229765  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:56:10.231122  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:10.231178  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:10.231211  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:10.231229  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:10.234188  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:10.725143  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:10.725215  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:10.725237  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:10.725265  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:10.729280  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:10.736975  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:10.737045  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:10.737070  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:10.737087  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:10.749891  103685 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1216 19:56:11.224785  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:11.224806  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:11.224816  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:11.224820  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:11.227897  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:11.228760  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:11.228813  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:11.228836  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:11.228855  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:11.231460  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:11.724708  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:11.724735  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:11.724743  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:11.724747  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:11.728919  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:56:11.730129  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:11.730151  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:11.730165  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:11.730173  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:11.734151  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:11.734687  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:56:12.225000  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:12.225032  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:12.225043  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:12.225047  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:12.227988  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:12.228784  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:12.228805  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:12.228815  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:12.228820  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:12.231696  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:12.724494  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:12.724517  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:12.724527  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:12.724531  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:12.727731  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:12.728784  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:12.728800  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:12.728808  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:12.728813  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:12.732751  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:13.224940  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:13.224965  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:13.224975  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:13.224979  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:13.228234  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:13.229084  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:13.229103  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:13.229112  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:13.229119  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:13.231882  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:13.724457  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:13.724480  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:13.724490  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:13.724493  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:13.727519  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:13.728335  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:13.728355  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:13.728365  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:13.728391  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:13.731971  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:14.224551  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:14.224574  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:14.224584  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:14.224588  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:14.227614  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:14.228376  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:14.228398  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:14.228408  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:14.228414  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:14.231233  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:14.232006  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:56:14.725360  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:14.725396  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:14.725407  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:14.725413  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:14.728382  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:14.729877  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:14.729894  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:14.729904  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:14.729909  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:14.735028  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:56:15.225207  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:15.225236  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:15.225246  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:15.225251  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:15.228353  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:15.229134  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:15.229151  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:15.229161  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:15.229165  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:15.232007  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:15.724472  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:15.724498  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:15.724508  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:15.724514  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:15.727828  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:15.728647  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:15.728682  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:15.728694  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:15.728703  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:15.731552  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:16.225281  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:16.225306  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:16.225316  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:16.225321  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:16.228384  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:16.229331  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:16.229352  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:16.229361  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:16.229367  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:16.232168  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:16.232808  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:56:16.724501  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:16.724527  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:16.724540  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:16.724544  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:16.727390  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:16.728122  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:16.728136  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:16.728145  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:16.728149  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:16.731646  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:17.224456  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:17.224478  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:17.224488  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:17.224492  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:17.227469  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:17.228184  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:17.228202  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:17.228211  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:17.228215  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:17.231017  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:17.725043  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:17.725066  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:17.725077  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:17.725081  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:17.728056  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:17.729693  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:17.729712  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:17.729721  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:17.729727  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:17.732700  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:18.225151  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:18.225183  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:18.225194  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:18.225202  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:18.228046  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:18.228700  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:18.228710  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:18.228718  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:18.228725  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:18.231197  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:18.724405  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:18.724438  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:18.724448  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:18.724453  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:18.727355  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:18.728309  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:18.728329  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:18.728339  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:18.728344  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:18.731840  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:18.732750  103685 pod_ready.go:103] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"False"
	I1216 19:56:19.225071  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:19.225096  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.225107  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.225111  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.228339  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:19.229131  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:19.229153  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.229162  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.229168  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.231684  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:19.232164  103685 pod_ready.go:93] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:19.232187  103685 pod_ready.go:82] duration metric: took 23.007975057s for pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.232203  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-082404-m03" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.232271  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m03
	I1216 19:56:19.232281  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.232289  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.232296  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.234971  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:19.235843  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m03
	I1216 19:56:19.235862  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.235872  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.235876  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.238222  103685 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1216 19:56:19.238362  103685 pod_ready.go:98] node "ha-082404-m03" hosting pod "kube-controller-manager-ha-082404-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:56:19.238380  103685 pod_ready.go:82] duration metric: took 6.167124ms for pod "kube-controller-manager-ha-082404-m03" in "kube-system" namespace to be "Ready" ...
	E1216 19:56:19.238397  103685 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-082404-m03" hosting pod "kube-controller-manager-ha-082404-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:56:19.238415  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kr525" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.238483  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kr525
	I1216 19:56:19.238491  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.238498  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.238503  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.241369  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:19.242179  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m03
	I1216 19:56:19.242205  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.242213  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.242244  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.244668  103685 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1216 19:56:19.244822  103685 pod_ready.go:98] node "ha-082404-m03" hosting pod "kube-proxy-kr525" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:56:19.244845  103685 pod_ready.go:82] duration metric: took 6.419318ms for pod "kube-proxy-kr525" in "kube-system" namespace to be "Ready" ...
	E1216 19:56:19.244855  103685 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-082404-m03" hosting pod "kube-proxy-kr525" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:56:19.244864  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pvlrj" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.244932  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvlrj
	I1216 19:56:19.244943  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.244952  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.244956  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.247831  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:19.248414  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m04
	I1216 19:56:19.248432  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.248441  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.248445  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.251092  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:19.251605  103685 pod_ready.go:93] pod "kube-proxy-pvlrj" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:19.251625  103685 pod_ready.go:82] duration metric: took 6.748556ms for pod "kube-proxy-pvlrj" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.251639  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wmg6k" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.251702  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wmg6k
	I1216 19:56:19.251714  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.251732  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.251740  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.254304  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:19.255206  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:19.255228  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.255238  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.255242  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.257912  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:19.258496  103685 pod_ready.go:93] pod "kube-proxy-wmg6k" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:19.258516  103685 pod_ready.go:82] duration metric: took 6.867379ms for pod "kube-proxy-wmg6k" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.258548  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x7xbp" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.425919  103685 request.go:632] Waited for 167.283867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7xbp
	I1216 19:56:19.425991  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7xbp
	I1216 19:56:19.426001  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.426011  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.426017  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.428924  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:19.625370  103685 request.go:632] Waited for 195.778885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:19.625466  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:19.625482  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.625496  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.625510  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.631040  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:56:19.631645  103685 pod_ready.go:93] pod "kube-proxy-x7xbp" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:19.631667  103685 pod_ready.go:82] duration metric: took 373.106318ms for pod "kube-proxy-x7xbp" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.631680  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:19.825646  103685 request.go:632] Waited for 193.899211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404
	I1216 19:56:19.825741  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404
	I1216 19:56:19.825755  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:19.825764  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:19.825769  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:19.828718  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:20.025994  103685 request.go:632] Waited for 196.3119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:20.026066  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:20.026077  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:20.026087  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:20.026091  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:20.029501  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:20.030116  103685 pod_ready.go:93] pod "kube-scheduler-ha-082404" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:20.030141  103685 pod_ready.go:82] duration metric: took 398.454203ms for pod "kube-scheduler-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:20.030155  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:20.226027  103685 request.go:632] Waited for 195.806039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404-m02
	I1216 19:56:20.226101  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404-m02
	I1216 19:56:20.226113  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:20.226122  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:20.226141  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:20.229284  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:20.425562  103685 request.go:632] Waited for 195.610328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:20.425645  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:20.425658  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:20.425668  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:20.425672  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:20.428627  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:20.429307  103685 pod_ready.go:93] pod "kube-scheduler-ha-082404-m02" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:20.429324  103685 pod_ready.go:82] duration metric: took 399.161037ms for pod "kube-scheduler-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:20.429338  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-082404-m03" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:20.625716  103685 request.go:632] Waited for 196.314792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404-m03
	I1216 19:56:20.625786  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404-m03
	I1216 19:56:20.625796  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:20.625805  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:20.625814  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:20.628862  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:20.825815  103685 request.go:632] Waited for 196.311075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m03
	I1216 19:56:20.825898  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m03
	I1216 19:56:20.825905  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:20.825918  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:20.825925  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:20.828557  103685 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1216 19:56:20.828703  103685 pod_ready.go:98] node "ha-082404-m03" hosting pod "kube-scheduler-ha-082404-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:56:20.828722  103685 pod_ready.go:82] duration metric: took 399.376884ms for pod "kube-scheduler-ha-082404-m03" in "kube-system" namespace to be "Ready" ...
	E1216 19:56:20.828733  103685 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-082404-m03" hosting pod "kube-scheduler-ha-082404-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-082404-m03": nodes "ha-082404-m03" not found
	I1216 19:56:20.828751  103685 pod_ready.go:39] duration metric: took 55.697255273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
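The pod_ready loop above issues paired GETs (the pod, then the node that hosts it) roughly every 500ms until each system pod reports the Ready condition as True, and skips pods whose node no longer exists (the ha-082404-m03 lookups return 404 because that node was removed). A minimal client-go sketch of that kind of check — purely illustrative, not minikube's actual pod_ready.go; the package and function names here are invented — could look like:

	// Package readiness: illustrative sketch of a "wait for pod Ready" check.
	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a kube-system pod until its Ready condition is True
	// or the timeout expires, mirroring the "waiting up to 6m0s" entries above.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the poll interval visible in the log
		}
		return fmt.Errorf("pod %q never became Ready within %v", name, timeout)
	}

In the log above, kube-controller-manager-ha-082404-m02 takes about 23s to pass this kind of check, while the -m03 pods are skipped outright because their node cannot be fetched.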
	I1216 19:56:20.828780  103685 api_server.go:52] waiting for apiserver process to appear ...
	I1216 19:56:20.828853  103685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 19:56:20.840568  103685 api_server.go:72] duration metric: took 55.988577826s to wait for apiserver process to appear ...
	I1216 19:56:20.840593  103685 api_server.go:88] waiting for apiserver healthz status ...
	I1216 19:56:20.840624  103685 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 19:56:20.849662  103685 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 19:56:20.849758  103685 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I1216 19:56:20.849771  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:20.849781  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:20.849787  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:20.850906  103685 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1216 19:56:20.851036  103685 api_server.go:141] control plane version: v1.32.0
	I1216 19:56:20.851056  103685 api_server.go:131] duration metric: took 10.455664ms to wait for apiserver health ...
	I1216 19:56:20.851066  103685 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 19:56:21.025495  103685 request.go:632] Waited for 174.345221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1216 19:56:21.025603  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1216 19:56:21.025619  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:21.025629  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:21.025633  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:21.031996  103685 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1216 19:56:21.041434  103685 system_pods.go:59] 26 kube-system pods found
	I1216 19:56:21.041474  103685 system_pods.go:61] "coredns-668d6bf9bc-9th4p" [56bab989-75df-426f-af86-73cef2741306] Running
	I1216 19:56:21.041482  103685 system_pods.go:61] "coredns-668d6bf9bc-mwl2r" [84f8cad3-3121-4fae-83c0-9fe5c573d6d4] Running
	I1216 19:56:21.041487  103685 system_pods.go:61] "etcd-ha-082404" [95cff35a-dcde-4bd9-89dd-05c7b42036cc] Running
	I1216 19:56:21.041491  103685 system_pods.go:61] "etcd-ha-082404-m02" [e91189dc-b8a0-4ff9-ba7a-af12b90d0c82] Running
	I1216 19:56:21.041495  103685 system_pods.go:61] "etcd-ha-082404-m03" [df692a92-f091-47fd-9a90-f007278dc5d4] Running
	I1216 19:56:21.041576  103685 system_pods.go:61] "kindnet-8nzqx" [c062cfe1-2c57-4040-8d48-673a935f60f6] Running
	I1216 19:56:21.041591  103685 system_pods.go:61] "kindnet-f7n6r" [22adac41-4ba2-4265-b6a1-f80addcffd92] Running
	I1216 19:56:21.041596  103685 system_pods.go:61] "kindnet-m64xz" [ae1a3842-84ec-4be8-bb48-9ffa21435040] Running
	I1216 19:56:21.041603  103685 system_pods.go:61] "kindnet-p6stw" [f4cb03ed-d63d-44a2-a22b-af8f0a23636c] Running
	I1216 19:56:21.041607  103685 system_pods.go:61] "kube-apiserver-ha-082404" [cb879082-55e7-4825-ab02-f366c2f09a3d] Running
	I1216 19:56:21.041611  103685 system_pods.go:61] "kube-apiserver-ha-082404-m02" [c4e969de-4014-401c-a809-c8f2f56815dd] Running
	I1216 19:56:21.041615  103685 system_pods.go:61] "kube-apiserver-ha-082404-m03" [5d2a0021-3e6e-49ee-8b43-76f233c076c1] Running
	I1216 19:56:21.041619  103685 system_pods.go:61] "kube-controller-manager-ha-082404" [1e745f98-ccc4-4511-8318-4e2456571628] Running
	I1216 19:56:21.041623  103685 system_pods.go:61] "kube-controller-manager-ha-082404-m02" [2996b9f3-2c14-4864-9e4d-82d58685df57] Running
	I1216 19:56:21.041628  103685 system_pods.go:61] "kube-controller-manager-ha-082404-m03" [7d94a045-a18c-4f87-a069-f88908ce9428] Running
	I1216 19:56:21.041632  103685 system_pods.go:61] "kube-proxy-kr525" [8b374900-b35c-42e1-8757-ce142b1cf04d] Running
	I1216 19:56:21.041645  103685 system_pods.go:61] "kube-proxy-pvlrj" [d5fc0309-78bb-42b3-a61f-82c5d4d9069e] Running
	I1216 19:56:21.041653  103685 system_pods.go:61] "kube-proxy-wmg6k" [6d50b21a-c351-47e2-9abd-9fcca1423aff] Running
	I1216 19:56:21.041657  103685 system_pods.go:61] "kube-proxy-x7xbp" [ce0d4ca6-fbc9-4f2f-996d-5bd01b41a14f] Running
	I1216 19:56:21.041661  103685 system_pods.go:61] "kube-scheduler-ha-082404" [acddb3d3-c314-439a-92db-316e5150ca22] Running
	I1216 19:56:21.041667  103685 system_pods.go:61] "kube-scheduler-ha-082404-m02" [3f0e8aae-a325-49d7-b616-4aee03dcca94] Running
	I1216 19:56:21.041671  103685 system_pods.go:61] "kube-scheduler-ha-082404-m03" [71f272ed-20a7-4ed3-a16b-7622af2210a2] Running
	I1216 19:56:21.041678  103685 system_pods.go:61] "kube-vip-ha-082404" [c70c2ea8-8fce-4883-b4bd-ac4b0f3a285d] Running
	I1216 19:56:21.041683  103685 system_pods.go:61] "kube-vip-ha-082404-m02" [d6f98a08-2873-48c7-9fd3-2b4b5cfb6154] Running
	I1216 19:56:21.041686  103685 system_pods.go:61] "kube-vip-ha-082404-m03" [de516177-52d7-4f79-9681-8090670d31da] Running
	I1216 19:56:21.041689  103685 system_pods.go:61] "storage-provisioner" [3c0d0135-4746-4b03-9877-d30c5297116e] Running
	I1216 19:56:21.041696  103685 system_pods.go:74] duration metric: took 190.623372ms to wait for pod list to return data ...
	I1216 19:56:21.041708  103685 default_sa.go:34] waiting for default service account to be created ...
	I1216 19:56:21.226129  103685 request.go:632] Waited for 184.332455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1216 19:56:21.226197  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1216 19:56:21.226204  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:21.226213  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:21.226217  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:21.229666  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:21.229999  103685 default_sa.go:45] found service account: "default"
	I1216 19:56:21.230023  103685 default_sa.go:55] duration metric: took 188.308102ms for default service account to be created ...
	I1216 19:56:21.230034  103685 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 19:56:21.425461  103685 request.go:632] Waited for 195.361083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1216 19:56:21.425540  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1216 19:56:21.425553  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:21.425562  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:21.425570  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:21.431552  103685 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1216 19:56:21.441543  103685 system_pods.go:86] 26 kube-system pods found
	I1216 19:56:21.441584  103685 system_pods.go:89] "coredns-668d6bf9bc-9th4p" [56bab989-75df-426f-af86-73cef2741306] Running
	I1216 19:56:21.441593  103685 system_pods.go:89] "coredns-668d6bf9bc-mwl2r" [84f8cad3-3121-4fae-83c0-9fe5c573d6d4] Running
	I1216 19:56:21.441598  103685 system_pods.go:89] "etcd-ha-082404" [95cff35a-dcde-4bd9-89dd-05c7b42036cc] Running
	I1216 19:56:21.441603  103685 system_pods.go:89] "etcd-ha-082404-m02" [e91189dc-b8a0-4ff9-ba7a-af12b90d0c82] Running
	I1216 19:56:21.441608  103685 system_pods.go:89] "etcd-ha-082404-m03" [df692a92-f091-47fd-9a90-f007278dc5d4] Running
	I1216 19:56:21.441612  103685 system_pods.go:89] "kindnet-8nzqx" [c062cfe1-2c57-4040-8d48-673a935f60f6] Running
	I1216 19:56:21.441616  103685 system_pods.go:89] "kindnet-f7n6r" [22adac41-4ba2-4265-b6a1-f80addcffd92] Running
	I1216 19:56:21.441620  103685 system_pods.go:89] "kindnet-m64xz" [ae1a3842-84ec-4be8-bb48-9ffa21435040] Running
	I1216 19:56:21.441625  103685 system_pods.go:89] "kindnet-p6stw" [f4cb03ed-d63d-44a2-a22b-af8f0a23636c] Running
	I1216 19:56:21.441629  103685 system_pods.go:89] "kube-apiserver-ha-082404" [cb879082-55e7-4825-ab02-f366c2f09a3d] Running
	I1216 19:56:21.441641  103685 system_pods.go:89] "kube-apiserver-ha-082404-m02" [c4e969de-4014-401c-a809-c8f2f56815dd] Running
	I1216 19:56:21.441645  103685 system_pods.go:89] "kube-apiserver-ha-082404-m03" [5d2a0021-3e6e-49ee-8b43-76f233c076c1] Running
	I1216 19:56:21.441649  103685 system_pods.go:89] "kube-controller-manager-ha-082404" [1e745f98-ccc4-4511-8318-4e2456571628] Running
	I1216 19:56:21.441654  103685 system_pods.go:89] "kube-controller-manager-ha-082404-m02" [2996b9f3-2c14-4864-9e4d-82d58685df57] Running
	I1216 19:56:21.441670  103685 system_pods.go:89] "kube-controller-manager-ha-082404-m03" [7d94a045-a18c-4f87-a069-f88908ce9428] Running
	I1216 19:56:21.441675  103685 system_pods.go:89] "kube-proxy-kr525" [8b374900-b35c-42e1-8757-ce142b1cf04d] Running
	I1216 19:56:21.441679  103685 system_pods.go:89] "kube-proxy-pvlrj" [d5fc0309-78bb-42b3-a61f-82c5d4d9069e] Running
	I1216 19:56:21.441684  103685 system_pods.go:89] "kube-proxy-wmg6k" [6d50b21a-c351-47e2-9abd-9fcca1423aff] Running
	I1216 19:56:21.441687  103685 system_pods.go:89] "kube-proxy-x7xbp" [ce0d4ca6-fbc9-4f2f-996d-5bd01b41a14f] Running
	I1216 19:56:21.441692  103685 system_pods.go:89] "kube-scheduler-ha-082404" [acddb3d3-c314-439a-92db-316e5150ca22] Running
	I1216 19:56:21.441696  103685 system_pods.go:89] "kube-scheduler-ha-082404-m02" [3f0e8aae-a325-49d7-b616-4aee03dcca94] Running
	I1216 19:56:21.441701  103685 system_pods.go:89] "kube-scheduler-ha-082404-m03" [71f272ed-20a7-4ed3-a16b-7622af2210a2] Running
	I1216 19:56:21.441715  103685 system_pods.go:89] "kube-vip-ha-082404" [c70c2ea8-8fce-4883-b4bd-ac4b0f3a285d] Running
	I1216 19:56:21.441728  103685 system_pods.go:89] "kube-vip-ha-082404-m02" [d6f98a08-2873-48c7-9fd3-2b4b5cfb6154] Running
	I1216 19:56:21.441738  103685 system_pods.go:89] "kube-vip-ha-082404-m03" [de516177-52d7-4f79-9681-8090670d31da] Running
	I1216 19:56:21.441750  103685 system_pods.go:89] "storage-provisioner" [3c0d0135-4746-4b03-9877-d30c5297116e] Running
	I1216 19:56:21.441761  103685 system_pods.go:126] duration metric: took 211.720634ms to wait for k8s-apps to be running ...
	I1216 19:56:21.441770  103685 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 19:56:21.441956  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 19:56:21.454732  103685 system_svc.go:56] duration metric: took 12.954103ms WaitForService to wait for kubelet
	I1216 19:56:21.454802  103685 kubeadm.go:582] duration metric: took 56.602815114s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 19:56:21.454828  103685 node_conditions.go:102] verifying NodePressure condition ...
	I1216 19:56:21.625163  103685 request.go:632] Waited for 170.24218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1216 19:56:21.625242  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1216 19:56:21.625253  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:21.625263  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:21.625267  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:21.631484  103685 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1216 19:56:21.632990  103685 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 19:56:21.633025  103685 node_conditions.go:123] node cpu capacity is 2
	I1216 19:56:21.633038  103685 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 19:56:21.633044  103685 node_conditions.go:123] node cpu capacity is 2
	I1216 19:56:21.633049  103685 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 19:56:21.633077  103685 node_conditions.go:123] node cpu capacity is 2
	I1216 19:56:21.633091  103685 node_conditions.go:105] duration metric: took 178.255363ms to run NodePressure ...
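The node_conditions lines above come from listing /api/v1/nodes and reading each node's capacity (ephemeral storage and CPU). A small illustrative sketch of that capacity read with client-go — again hypothetical names, not minikube's node_conditions.go:

	// Package nodecheck: illustrative sketch of reading node capacity values.
	package nodecheck

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// reportNodeCapacity lists all nodes and prints the same two capacity
	// figures the log records: ephemeral storage (e.g. 203034800Ki) and cpu (2).
	func reportNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}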
	I1216 19:56:21.633104  103685 start.go:241] waiting for startup goroutines ...
	I1216 19:56:21.633132  103685 start.go:255] writing updated cluster config ...
	I1216 19:56:21.636346  103685 out.go:201] 
	I1216 19:56:21.639287  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:56:21.639451  103685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/config.json ...
	I1216 19:56:21.642484  103685 out.go:177] * Starting "ha-082404-m04" worker node in "ha-082404" cluster
	I1216 19:56:21.645892  103685 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 19:56:21.648455  103685 out.go:177] * Pulling base image v0.0.45-1734029593-20090 ...
	I1216 19:56:21.651018  103685 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 19:56:21.651050  103685 cache.go:56] Caching tarball of preloaded images
	I1216 19:56:21.651093  103685 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1216 19:56:21.651158  103685 preload.go:172] Found /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1216 19:56:21.651169  103685 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on docker
	I1216 19:56:21.651307  103685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/config.json ...
	I1216 19:56:21.672354  103685 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon, skipping pull
	I1216 19:56:21.672374  103685 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in daemon, skipping load
	I1216 19:56:21.672387  103685 cache.go:194] Successfully downloaded all kic artifacts
	I1216 19:56:21.672411  103685 start.go:360] acquireMachinesLock for ha-082404-m04: {Name:mkbfef421b2e38a6e5e4a7c28eb280c84a721335 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 19:56:21.672578  103685 start.go:364] duration metric: took 92.945µs to acquireMachinesLock for "ha-082404-m04"
	I1216 19:56:21.672632  103685 start.go:96] Skipping create...Using existing machine configuration
	I1216 19:56:21.672638  103685 fix.go:54] fixHost starting: m04
	I1216 19:56:21.673035  103685 cli_runner.go:164] Run: docker container inspect ha-082404-m04 --format={{.State.Status}}
	I1216 19:56:21.690725  103685 fix.go:112] recreateIfNeeded on ha-082404-m04: state=Stopped err=<nil>
	W1216 19:56:21.690752  103685 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 19:56:21.693800  103685 out.go:177] * Restarting existing docker container for "ha-082404-m04" ...
	I1216 19:56:21.696457  103685 cli_runner.go:164] Run: docker start ha-082404-m04
	I1216 19:56:22.039396  103685 cli_runner.go:164] Run: docker container inspect ha-082404-m04 --format={{.State.Status}}
	I1216 19:56:22.063250  103685 kic.go:430] container "ha-082404-m04" state is running.
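The cli_runner calls above shell out to the docker CLI with a Go template to read the container's state before and after `docker start`. An illustrative helper for that pattern (hypothetical package and function names, not minikube's cli_runner.go):

	// Package runner: illustrative sketch of querying container state via the docker CLI.
	package runner

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState mirrors: docker container inspect <name> --format={{.State.Status}}
	// and returns values such as "running" or "exited".
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}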
	I1216 19:56:22.063644  103685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404-m04
	I1216 19:56:22.087920  103685 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/config.json ...
	I1216 19:56:22.088171  103685 machine.go:93] provisionDockerMachine start ...
	I1216 19:56:22.088231  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:22.111363  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:56:22.111806  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1216 19:56:22.111816  103685 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 19:56:22.112722  103685 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 19:56:25.277547  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-082404-m04
	
	I1216 19:56:25.277573  103685 ubuntu.go:169] provisioning hostname "ha-082404-m04"
	I1216 19:56:25.277636  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:25.298920  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:56:25.299177  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1216 19:56:25.299195  103685 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-082404-m04 && echo "ha-082404-m04" | sudo tee /etc/hostname
	I1216 19:56:25.480137  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-082404-m04
	
	I1216 19:56:25.480216  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:25.498886  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:56:25.499131  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1216 19:56:25.499154  103685 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-082404-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-082404-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-082404-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 19:56:25.650093  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 19:56:25.650174  103685 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20091-2258/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-2258/.minikube}
	I1216 19:56:25.650212  103685 ubuntu.go:177] setting up certificates
	I1216 19:56:25.650241  103685 provision.go:84] configureAuth start
	I1216 19:56:25.650320  103685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404-m04
	I1216 19:56:25.669571  103685 provision.go:143] copyHostCerts
	I1216 19:56:25.669608  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem
	I1216 19:56:25.669640  103685 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem, removing ...
	I1216 19:56:25.669646  103685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem
	I1216 19:56:25.669798  103685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-2258/.minikube/ca.pem (1082 bytes)
	I1216 19:56:25.669984  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem
	I1216 19:56:25.670007  103685 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem, removing ...
	I1216 19:56:25.670012  103685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem
	I1216 19:56:25.670108  103685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-2258/.minikube/cert.pem (1123 bytes)
	I1216 19:56:25.670248  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem
	I1216 19:56:25.670276  103685 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem, removing ...
	I1216 19:56:25.670281  103685 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem
	I1216 19:56:25.670397  103685 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-2258/.minikube/key.pem (1675 bytes)
	I1216 19:56:25.670502  103685 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca-key.pem org=jenkins.ha-082404-m04 san=[127.0.0.1 192.168.49.5 ha-082404-m04 localhost minikube]
	I1216 19:56:25.951558  103685 provision.go:177] copyRemoteCerts
	I1216 19:56:25.951696  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 19:56:25.951768  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:25.978960  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m04/id_rsa Username:docker}
	I1216 19:56:26.088131  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1216 19:56:26.088197  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 19:56:26.116742  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1216 19:56:26.116808  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 19:56:26.143381  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1216 19:56:26.143448  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 19:56:26.171661  103685 provision.go:87] duration metric: took 521.391251ms to configureAuth
	I1216 19:56:26.171689  103685 ubuntu.go:193] setting minikube options for container-runtime
	I1216 19:56:26.171926  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:56:26.171989  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:26.192881  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:56:26.193116  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1216 19:56:26.193128  103685 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 19:56:26.343116  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 19:56:26.343140  103685 ubuntu.go:71] root file system type: overlay
	I1216 19:56:26.343297  103685 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 19:56:26.343369  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:26.362478  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:56:26.362828  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1216 19:56:26.362919  103685 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 19:56:26.528065  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 19:56:26.528244  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:26.547382  103685 main.go:141] libmachine: Using SSH client type: native
	I1216 19:56:26.547644  103685 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I1216 19:56:26.547668  103685 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 19:56:27.503593  103685 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-12-16 19:53:45.448092769 +0000
	+++ /lib/systemd/system/docker.service.new	2024-12-16 19:56:26.523170095 +0000
	@@ -14,7 +14,6 @@
	 
	 Environment=NO_PROXY=192.168.49.2
	 Environment=NO_PROXY=192.168.49.2,192.168.49.3
	-Environment=NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 19:56:27.503684  103685 machine.go:96] duration metric: took 5.41550101s to provisionDockerMachine
	I1216 19:56:27.503712  103685 start.go:293] postStartSetup for "ha-082404-m04" (driver="docker")
	I1216 19:56:27.503752  103685 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 19:56:27.503867  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 19:56:27.503936  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:27.526130  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m04/id_rsa Username:docker}
	I1216 19:56:27.632347  103685 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 19:56:27.636258  103685 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 19:56:27.636292  103685 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 19:56:27.636303  103685 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 19:56:27.636310  103685 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1216 19:56:27.636320  103685 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-2258/.minikube/addons for local assets ...
	I1216 19:56:27.636378  103685 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-2258/.minikube/files for local assets ...
	I1216 19:56:27.636447  103685 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem -> 75692.pem in /etc/ssl/certs
	I1216 19:56:27.636454  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem -> /etc/ssl/certs/75692.pem
	I1216 19:56:27.636554  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 19:56:27.646385  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem --> /etc/ssl/certs/75692.pem (1708 bytes)
	I1216 19:56:27.689390  103685 start.go:296] duration metric: took 185.636209ms for postStartSetup
	I1216 19:56:27.689513  103685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 19:56:27.689583  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:27.710027  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m04/id_rsa Username:docker}
	I1216 19:56:27.815063  103685 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 19:56:27.819956  103685 fix.go:56] duration metric: took 6.147312259s for fixHost
	I1216 19:56:27.819986  103685 start.go:83] releasing machines lock for "ha-082404-m04", held for 6.14736998s
	I1216 19:56:27.820063  103685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404-m04
	I1216 19:56:27.840254  103685 out.go:177] * Found network options:
	I1216 19:56:27.842884  103685 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1216 19:56:27.845382  103685 proxy.go:119] fail to check proxy env: Error ip not in block
	W1216 19:56:27.845403  103685 proxy.go:119] fail to check proxy env: Error ip not in block
	W1216 19:56:27.845426  103685 proxy.go:119] fail to check proxy env: Error ip not in block
	W1216 19:56:27.845442  103685 proxy.go:119] fail to check proxy env: Error ip not in block
	I1216 19:56:27.845515  103685 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 19:56:27.845560  103685 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 19:56:27.845617  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:27.845563  103685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:56:27.864348  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m04/id_rsa Username:docker}
	I1216 19:56:27.867303  103685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m04/id_rsa Username:docker}
	I1216 19:56:28.105016  103685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1216 19:56:28.128883  103685 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1216 19:56:28.128969  103685 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 19:56:28.139463  103685 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 19:56:28.139492  103685 start.go:495] detecting cgroup driver to use...
	I1216 19:56:28.139530  103685 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 19:56:28.139623  103685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 19:56:28.157360  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1216 19:56:28.168032  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 19:56:28.178280  103685 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 19:56:28.178373  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 19:56:28.189758  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 19:56:28.201408  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 19:56:28.211923  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 19:56:28.233612  103685 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 19:56:28.245064  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 19:56:28.255387  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 19:56:28.265806  103685 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 19:56:28.276335  103685 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 19:56:28.292378  103685 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 19:56:28.301286  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:56:28.396540  103685 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 19:56:28.510463  103685 start.go:495] detecting cgroup driver to use...
	I1216 19:56:28.510535  103685 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 19:56:28.510627  103685 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 19:56:28.537200  103685 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1216 19:56:28.537301  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 19:56:28.553279  103685 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 19:56:28.575858  103685 ssh_runner.go:195] Run: which cri-dockerd
	I1216 19:56:28.581378  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 19:56:28.593589  103685 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1216 19:56:28.619251  103685 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 19:56:28.772603  103685 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 19:56:28.900292  103685 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 19:56:28.900376  103685 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 19:56:28.933668  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:56:29.072729  103685 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 19:56:29.508427  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 19:56:29.523686  103685 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 19:56:29.541797  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 19:56:29.555024  103685 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 19:56:29.655142  103685 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 19:56:29.756812  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:56:29.857268  103685 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 19:56:29.873344  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 19:56:29.887508  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:56:29.996934  103685 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 19:56:30.146562  103685 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 19:56:30.146647  103685 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 19:56:30.152683  103685 start.go:563] Will wait 60s for crictl version
	I1216 19:56:30.152762  103685 ssh_runner.go:195] Run: which crictl
	I1216 19:56:30.158324  103685 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 19:56:30.205924  103685 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I1216 19:56:30.206044  103685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 19:56:30.237938  103685 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 19:56:30.268323  103685 out.go:235] * Preparing Kubernetes v1.32.0 on Docker 27.4.0 ...
	I1216 19:56:30.270925  103685 out.go:177]   - env NO_PROXY=192.168.49.2
	I1216 19:56:30.273691  103685 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1216 19:56:30.276272  103685 cli_runner.go:164] Run: docker network inspect ha-082404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 19:56:30.297951  103685 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 19:56:30.302947  103685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 19:56:30.319975  103685 mustload.go:65] Loading cluster: ha-082404
	I1216 19:56:30.320281  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:56:30.320640  103685 cli_runner.go:164] Run: docker container inspect ha-082404 --format={{.State.Status}}
	I1216 19:56:30.340187  103685 host.go:66] Checking if "ha-082404" exists ...
	I1216 19:56:30.340492  103685 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404 for IP: 192.168.49.5
	I1216 19:56:30.340508  103685 certs.go:194] generating shared ca certs ...
	I1216 19:56:30.340525  103685 certs.go:226] acquiring lock for ca certs: {Name:mk61ac4ce13eccd2c732f8ba869cb043f9f7a744 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:56:30.340650  103685 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.key
	I1216 19:56:30.340696  103685 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.key
	I1216 19:56:30.340711  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 19:56:30.340726  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1216 19:56:30.340743  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 19:56:30.340757  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 19:56:30.340824  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569.pem (1338 bytes)
	W1216 19:56:30.340862  103685 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569_empty.pem, impossibly tiny 0 bytes
	I1216 19:56:30.340877  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 19:56:30.340906  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/ca.pem (1082 bytes)
	I1216 19:56:30.340935  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/cert.pem (1123 bytes)
	I1216 19:56:30.340962  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/key.pem (1675 bytes)
	I1216 19:56:30.341022  103685 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem (1708 bytes)
	I1216 19:56:30.341061  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem -> /usr/share/ca-certificates/75692.pem
	I1216 19:56:30.341082  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:56:30.341101  103685 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569.pem -> /usr/share/ca-certificates/7569.pem
	I1216 19:56:30.341130  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 19:56:30.373731  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 19:56:30.399959  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 19:56:30.424509  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 19:56:30.450525  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/ssl/certs/75692.pem --> /usr/share/ca-certificates/75692.pem (1708 bytes)
	I1216 19:56:30.478017  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 19:56:30.502499  103685 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-2258/.minikube/certs/7569.pem --> /usr/share/ca-certificates/7569.pem (1338 bytes)
	I1216 19:56:30.529293  103685 ssh_runner.go:195] Run: openssl version
	I1216 19:56:30.535068  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75692.pem && ln -fs /usr/share/ca-certificates/75692.pem /etc/ssl/certs/75692.pem"
	I1216 19:56:30.544829  103685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75692.pem
	I1216 19:56:30.548643  103685 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/75692.pem
	I1216 19:56:30.548736  103685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75692.pem
	I1216 19:56:30.555655  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75692.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 19:56:30.565240  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 19:56:30.575984  103685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:56:30.579780  103685 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:56:30.579853  103685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:56:30.587436  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 19:56:30.596934  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7569.pem && ln -fs /usr/share/ca-certificates/7569.pem /etc/ssl/certs/7569.pem"
	I1216 19:56:30.607031  103685 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7569.pem
	I1216 19:56:30.610989  103685 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/7569.pem
	I1216 19:56:30.611094  103685 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7569.pem
	I1216 19:56:30.618258  103685 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7569.pem /etc/ssl/certs/51391683.0"
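The loop above installs each CA certificate using the OpenSSL hashed-directory convention: the PEM is placed under /usr/share/ca-certificates and /etc/ssl/certs, its subject hash is computed with "openssl x509 -hash -noout", and a <hash>.0 symlink is created so TLS libraries can locate it by hash (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). The Go sketch below shows only the hash-and-symlink step; it shells out to openssl the same way the log does and is illustrative, not minikube's code.

// hash_link.go - sketch of the /etc/ssl/certs/<hash>.0 symlink step from the
// log: ask openssl for the certificate's subject hash, then link the hashed
// name at the installed PEM. Needs root for /etc/ssl/certs.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, mirroring the "test -L || ln -fs" guard above.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}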
	I1216 19:56:30.627658  103685 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 19:56:30.631321  103685 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 19:56:30.631362  103685 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.32.0  false true} ...
	I1216 19:56:30.631453  103685 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-082404-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:ha-082404 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 19:56:30.631526  103685 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 19:56:30.641050  103685 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 19:56:30.641174  103685 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1216 19:56:30.650273  103685 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1216 19:56:30.674390  103685 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 19:56:30.696364  103685 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1216 19:56:30.700898  103685 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 19:56:30.712441  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:56:30.810974  103685 ssh_runner.go:195] Run: sudo systemctl start kubelet
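The kubelet drop-in printed a few lines up is rendered per node: only --hostname-override and --node-ip differ between nodes, and the result is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf before kubelet is restarted. Below is a text/template sketch of that rendering built from the flags visible in the log for ha-082404-m04; it is an illustration, not the template minikube actually ships.

// kubelet_dropin.go - sketch: render the per-node kubelet drop-in shown in
// the log (only hostname-override and node-ip change between nodes).
package main

import (
	"log"
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type node struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropin))
	// Values taken from the ha-082404-m04 entry in the log above.
	err := tmpl.Execute(os.Stdout, node{
		KubernetesVersion: "v1.32.0",
		NodeName:          "ha-082404-m04",
		NodeIP:            "192.168.49.5",
	})
	if err != nil {
		log.Fatal(err)
	}
}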
	I1216 19:56:30.824547  103685 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.32.0 ContainerRuntime: ControlPlane:false Worker:true}
	I1216 19:56:30.824958  103685 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:56:30.827965  103685 out.go:177] * Verifying Kubernetes components...
	I1216 19:56:30.830655  103685 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:56:30.916428  103685 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 19:56:30.928719  103685 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:56:30.928989  103685 kapi.go:59] client config for ha-082404: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/client.crt", KeyFile:"/home/jenkins/minikube-integration/20091-2258/.minikube/profiles/ha-082404/client.key", CAFile:"/home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1eafe20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1216 19:56:30.929049  103685 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1216 19:56:30.929250  103685 node_ready.go:35] waiting up to 6m0s for node "ha-082404-m04" to be "Ready" ...
	I1216 19:56:30.929321  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m04
	I1216 19:56:30.929333  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.929341  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.929350  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.932322  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:30.932850  103685 node_ready.go:49] node "ha-082404-m04" has status "Ready":"True"
	I1216 19:56:30.932872  103685 node_ready.go:38] duration metric: took 3.605533ms for node "ha-082404-m04" to be "Ready" ...
	I1216 19:56:30.932884  103685 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 19:56:30.932955  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1216 19:56:30.932966  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.932974  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.932978  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.937924  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:56:30.945060  103685 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9th4p" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:30.945203  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-9th4p
	I1216 19:56:30.945243  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.945263  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.945269  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.948167  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:30.949205  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:30.949226  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.949236  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.949240  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.951762  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:30.952566  103685 pod_ready.go:93] pod "coredns-668d6bf9bc-9th4p" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:30.952590  103685 pod_ready.go:82] duration metric: took 7.489401ms for pod "coredns-668d6bf9bc-9th4p" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:30.952603  103685 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mwl2r" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:30.952665  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-mwl2r
	I1216 19:56:30.952676  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.952684  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.952688  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.955524  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:30.956533  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:30.956552  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.956562  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.956567  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.959332  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:30.959840  103685 pod_ready.go:93] pod "coredns-668d6bf9bc-mwl2r" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:30.959861  103685 pod_ready.go:82] duration metric: took 7.251073ms for pod "coredns-668d6bf9bc-mwl2r" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:30.959874  103685 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:30.959944  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-082404
	I1216 19:56:30.959954  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.959965  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.959969  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.962649  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:30.963318  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:30.963337  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.963348  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.963355  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.965764  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:30.966478  103685 pod_ready.go:93] pod "etcd-ha-082404" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:30.966500  103685 pod_ready.go:82] duration metric: took 6.611412ms for pod "etcd-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:30.966515  103685 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:30.966628  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-082404-m02
	I1216 19:56:30.966639  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.966648  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.966655  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.969198  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:30.969950  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:30.969971  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:30.969980  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:30.969984  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:30.972357  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:30.973012  103685 pod_ready.go:93] pod "etcd-ha-082404-m02" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:30.973032  103685 pod_ready.go:82] duration metric: took 6.484236ms for pod "etcd-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:30.973055  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:31.130296  103685 request.go:632] Waited for 157.170563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404
	I1216 19:56:31.130393  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404
	I1216 19:56:31.130411  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:31.130425  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:31.130438  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:31.133748  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:31.330290  103685 request.go:632] Waited for 195.221105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:31.330382  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:31.330395  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:31.330404  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:31.330409  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:31.333313  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:31.334067  103685 pod_ready.go:93] pod "kube-apiserver-ha-082404" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:31.334087  103685 pod_ready.go:82] duration metric: took 361.020589ms for pod "kube-apiserver-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:31.334100  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:31.530221  103685 request.go:632] Waited for 196.046032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404-m02
	I1216 19:56:31.530308  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-082404-m02
	I1216 19:56:31.530337  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:31.530364  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:31.530377  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:31.534850  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:56:31.729520  103685 request.go:632] Waited for 192.172486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:31.729609  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:31.729630  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:31.729639  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:31.729644  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:31.734424  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:56:31.735635  103685 pod_ready.go:93] pod "kube-apiserver-ha-082404-m02" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:31.735658  103685 pod_ready.go:82] duration metric: took 401.549615ms for pod "kube-apiserver-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:31.735672  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:31.929593  103685 request.go:632] Waited for 193.855761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:56:31.929652  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404
	I1216 19:56:31.929665  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:31.929674  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:31.929689  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:31.932582  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:32.129682  103685 request.go:632] Waited for 196.303669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:32.129758  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:32.129767  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:32.129775  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:32.129782  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:32.132627  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:32.133318  103685 pod_ready.go:93] pod "kube-controller-manager-ha-082404" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:32.133341  103685 pod_ready.go:82] duration metric: took 397.660252ms for pod "kube-controller-manager-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:32.133354  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:32.329739  103685 request.go:632] Waited for 196.317208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:32.329924  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-082404-m02
	I1216 19:56:32.329937  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:32.329946  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:32.329951  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:32.332733  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:32.529889  103685 request.go:632] Waited for 196.345047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:32.529950  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:32.529958  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:32.529968  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:32.529975  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:32.532947  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:32.533534  103685 pod_ready.go:93] pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:32.533558  103685 pod_ready.go:82] duration metric: took 400.195143ms for pod "kube-controller-manager-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:32.533572  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pvlrj" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:32.729968  103685 request.go:632] Waited for 196.332805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvlrj
	I1216 19:56:32.730036  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pvlrj
	I1216 19:56:32.730047  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:32.730056  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:32.730061  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:32.736523  103685 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1216 19:56:32.929677  103685 request.go:632] Waited for 191.336116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m04
	I1216 19:56:32.929755  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m04
	I1216 19:56:32.929768  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:32.929777  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:32.929783  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:32.932651  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:32.933187  103685 pod_ready.go:93] pod "kube-proxy-pvlrj" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:32.933204  103685 pod_ready.go:82] duration metric: took 399.624591ms for pod "kube-proxy-pvlrj" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:32.933217  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wmg6k" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:33.130202  103685 request.go:632] Waited for 196.921444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wmg6k
	I1216 19:56:33.130264  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wmg6k
	I1216 19:56:33.130274  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:33.130285  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:33.130294  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:33.133319  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:33.330212  103685 request.go:632] Waited for 196.149495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:33.330270  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:33.330275  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:33.330285  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:33.330289  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:33.333034  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:33.333584  103685 pod_ready.go:93] pod "kube-proxy-wmg6k" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:33.333603  103685 pod_ready.go:82] duration metric: took 400.378734ms for pod "kube-proxy-wmg6k" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:33.333635  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x7xbp" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:33.529578  103685 request.go:632] Waited for 195.874517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7xbp
	I1216 19:56:33.529657  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7xbp
	I1216 19:56:33.529668  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:33.529684  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:33.529693  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:33.533017  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:33.730207  103685 request.go:632] Waited for 196.259532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:33.730324  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:33.730355  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:33.730378  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:33.730396  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:33.733058  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:33.733569  103685 pod_ready.go:93] pod "kube-proxy-x7xbp" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:33.733584  103685 pod_ready.go:82] duration metric: took 399.940134ms for pod "kube-proxy-x7xbp" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:33.733595  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:33.929513  103685 request.go:632] Waited for 195.851822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404
	I1216 19:56:33.929572  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404
	I1216 19:56:33.929596  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:33.929614  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:33.929623  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:33.932575  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:34.129623  103685 request.go:632] Waited for 196.140859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:34.129677  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404
	I1216 19:56:34.129687  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:34.129702  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:34.129725  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:34.134442  103685 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1216 19:56:34.135275  103685 pod_ready.go:93] pod "kube-scheduler-ha-082404" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:34.135297  103685 pod_ready.go:82] duration metric: took 401.693834ms for pod "kube-scheduler-ha-082404" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:34.135311  103685 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:34.329710  103685 request.go:632] Waited for 194.325683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404-m02
	I1216 19:56:34.329851  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-082404-m02
	I1216 19:56:34.329885  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:34.329903  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:34.329909  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:34.333272  103685 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1216 19:56:34.530313  103685 request.go:632] Waited for 196.337523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:34.530388  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-082404-m02
	I1216 19:56:34.530402  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:34.530412  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:34.530422  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:34.533185  103685 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1216 19:56:34.533763  103685 pod_ready.go:93] pod "kube-scheduler-ha-082404-m02" in "kube-system" namespace has status "Ready":"True"
	I1216 19:56:34.533786  103685 pod_ready.go:82] duration metric: took 398.4661ms for pod "kube-scheduler-ha-082404-m02" in "kube-system" namespace to be "Ready" ...
	I1216 19:56:34.533801  103685 pod_ready.go:39] duration metric: took 3.600902057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 19:56:34.533874  103685 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 19:56:34.533945  103685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 19:56:34.546365  103685 system_svc.go:56] duration metric: took 12.483958ms WaitForService to wait for kubelet
	I1216 19:56:34.546412  103685 kubeadm.go:582] duration metric: took 3.721819048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 19:56:34.546431  103685 node_conditions.go:102] verifying NodePressure condition ...
	I1216 19:56:34.729743  103685 request.go:632] Waited for 183.181609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1216 19:56:34.729803  103685 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1216 19:56:34.729812  103685 round_trippers.go:469] Request Headers:
	I1216 19:56:34.729854  103685 round_trippers.go:473]     Accept: application/json, */*
	I1216 19:56:34.729861  103685 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1216 19:56:34.737711  103685 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1216 19:56:34.739565  103685 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 19:56:34.739591  103685 node_conditions.go:123] node cpu capacity is 2
	I1216 19:56:34.739603  103685 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 19:56:34.739607  103685 node_conditions.go:123] node cpu capacity is 2
	I1216 19:56:34.739612  103685 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 19:56:34.739616  103685 node_conditions.go:123] node cpu capacity is 2
	I1216 19:56:34.739621  103685 node_conditions.go:105] duration metric: took 193.18517ms to run NodePressure ...
	I1216 19:56:34.739633  103685 start.go:241] waiting for startup goroutines ...
	I1216 19:56:34.739659  103685 start.go:255] writing updated cluster config ...
	I1216 19:56:34.739983  103685 ssh_runner.go:195] Run: rm -f paused
	I1216 19:56:34.807887  103685 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 19:56:34.812506  103685 out.go:177] * Done! kubectl is now configured to use "ha-082404" cluster and "default" namespace by default
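The readiness phase above is a series of raw GETs against /api/v1/nodes/... and /api/v1/namespaces/kube-system/pods/..., and the "Waited for ... due to client-side throttling" messages come from client-go's default QPS/Burst limiter delaying those requests. The sketch below performs the same node-Ready check with typed client-go calls; the kubeconfig path and node name are copied from the log, and the QPS/Burst bump is included only to show where that throttling knob lives.

// node_ready.go - sketch: check the Ready condition of a node with client-go,
// the typed equivalent of the GET /api/v1/nodes/<name> calls in the log.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20091-2258/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	// Defaults are QPS=5, Burst=10; raising them avoids the client-side
	// throttling waits seen in the log, at the cost of more API load.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-082404-m04", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("%s Ready=%s (reason: %s)\n", node.Name, cond.Status, cond.Reason)
		}
	}
}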
	
	
	==> Docker <==
	Dec 16 19:54:58 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:54:58Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-9th4p_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9128e494ac0e7020e201e9ac5f60e8e2183d7f1e544435b7726145dba865da7e\""
	Dec 16 19:54:58 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:54:58Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-9th4p_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2bea468deff55fcec1295d54e34efbd26f13e5cbe963bf7e3b7c5c4e606cd6db\""
	Dec 16 19:54:58 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:54:58Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-mwl2r_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"30043c00e646a7738d6e6b290738a97385c910e4be7c7a7368f2e561ae255ed1\""
	Dec 16 19:54:58 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:54:58Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-mwl2r_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"adc0d3262bbd118238467f291bd8651bbbf2e033337dc9f63de6f42467df04a9\""
	Dec 16 19:54:59 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:54:59Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-f7kww_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"104dc58f833120481c3c51f493e3ba3c65ddb41c211aa000892ca3213fc51173\""
	Dec 16 19:54:59 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:54:59Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-f7kww_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bed50a955af114bb190915894779d531b23cc725e97cdfe5c818496e9d1c6773\""
	Dec 16 19:55:00 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-f7kww_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"104dc58f833120481c3c51f493e3ba3c65ddb41c211aa000892ca3213fc51173\""
	Dec 16 19:55:00 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-9th4p_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9128e494ac0e7020e201e9ac5f60e8e2183d7f1e544435b7726145dba865da7e\""
	Dec 16 19:55:00 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:00Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-mwl2r_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"30043c00e646a7738d6e6b290738a97385c910e4be7c7a7368f2e561ae255ed1\""
	Dec 16 19:55:04 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:04Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"0959a4005a36568ca29c3f55a632fea8af2d4c86250c1a77542833ba20528be2\". Proceed without further sandbox information."
	Dec 16 19:55:04 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:04Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"b683c7cd1f99e265fbe6ea7f92bf4a0e7f6831104d4844d765c4fd18477ae8fb\". Proceed without further sandbox information."
	Dec 16 19:55:05 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:05Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"4de0630835e4c8f083af020fd98362560fd03c0b304ac1b981e903aac6038d75\". Proceed without further sandbox information."
	Dec 16 19:55:05 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8fa66e19e5c7ae5346465d3c449f010408ba7d6ecc7061affd3258fcfd2e159/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Dec 16 19:55:05 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/303e373747b455919bbf27dc34a3e802c07f35418d3aecf6727a960e9492a202/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options trust-ad ndots:0 edns0]"
	Dec 16 19:55:05 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c802c6e18bab9dab186b7bdba4604f511326b7e6a42dec6b6ee13bbe10891cdb/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Dec 16 19:55:05 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c9be92fc010aef94352c80be37841d9105ba84c31eecd151c832c1dab2bb939/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Dec 16 19:55:06 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8082d07116f548d4e5565fdb6083dd0a9abe240f5905a8d22979ade61fbcda3b/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Dec 16 19:55:21 ha-082404 dockerd[1061]: time="2024-12-16T19:55:21.455414133Z" level=info msg="ignoring event" container=83f8881a95f7fe902eeeace12861c5eada335956b6149c5a2e9b9fbb28a63be2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 16 19:55:26 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Dec 16 19:55:27 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a03474977bba9e8792837697582177debe4fb2f7a9e9c86390cd26f3eb41ca39/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Dec 16 19:55:27 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0d96b49f74b86a805dcc2bfda49c0da56e579ecf8e81578ddc65d236eb470cf2/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Dec 16 19:55:27 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3826d0f3230e1a637323bfd3f4ea6c478c142137b6e3857703539d41567fc8c6/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Dec 16 19:55:27 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c876d42918e89f7a13d5a2c502a12f219a10bdb241b1d098457a6d766bb9335/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Dec 16 19:55:27 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/27e8c579574dec5e7d0f21f871efaa6010a74a84e9074711d9b412aece8a377e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 16 19:55:27 ha-082404 cri-dockerd[1371]: time="2024-12-16T19:55:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b135e0ba00451bea659f1cf1be88e708b928feb21010a004a2a660e8dbb8e716/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	893c11024e002       2f6c962e7b831       27 seconds ago       Running             coredns                   4                   b135e0ba00451       coredns-668d6bf9bc-9th4p
	617f21ce9a32c       2f6c962e7b831       32 seconds ago       Running             coredns                   4                   3826d0f3230e1       coredns-668d6bf9bc-mwl2r
	d5a5339e5c899       2be0bcf609c65       41 seconds ago       Running             kindnet-cni               2                   a03474977bba9       kindnet-8nzqx
	fdaa9da785d37       ba04bb24b9575       42 seconds ago       Running             storage-provisioner       4                   0d96b49f74b86       storage-provisioner
	3ddf05914659e       89a35e2ebb6b9       43 seconds ago       Running             busybox                   2                   27e8c579574de       busybox-58667487b6-f7kww
	6d70bd91fd793       2f50386e20bfd       44 seconds ago       Running             kube-proxy                2                   5c876d42918e8       kube-proxy-x7xbp
	35b1fb0a1945d       a8d049396f6b8       57 seconds ago       Running             kube-controller-manager   4                   303e373747b45       kube-controller-manager-ha-082404
	12cc9372bc39e       334f34d04b9fe       About a minute ago   Running             kube-vip                  1                   8082d07116f54       kube-vip-ha-082404
	d8ab2b1e58da5       c3ff26fb59f37       About a minute ago   Running             kube-scheduler            2                   9c9be92fc010a       kube-scheduler-ha-082404
	3a57c691a62b7       7fc9d4aa817aa       About a minute ago   Running             etcd                      2                   c802c6e18bab9       etcd-ha-082404
	83f8881a95f7f       a8d049396f6b8       About a minute ago   Exited              kube-controller-manager   3                   303e373747b45       kube-controller-manager-ha-082404
	938f2f755d1e1       2b5bd0f16085a       About a minute ago   Running             kube-apiserver            2                   e8fa66e19e5c7       kube-apiserver-ha-082404
	96368e60b6cd3       ba04bb24b9575       2 minutes ago        Exited              storage-provisioner       3                   eec313ef0a43a       storage-provisioner
	7c8e374b59119       2f50386e20bfd       4 minutes ago        Exited              kube-proxy                1                   740286e2648c8       kube-proxy-x7xbp
	fb3fa2313cf97       89a35e2ebb6b9       4 minutes ago        Exited              busybox                   1                   104dc58f83312       busybox-58667487b6-f7kww
	6a5762f475692       2f6c962e7b831       4 minutes ago        Exited              coredns                   3                   30043c00e646a       coredns-668d6bf9bc-mwl2r
	6210fc1a4717d       2f6c962e7b831       4 minutes ago        Exited              coredns                   3                   9128e494ac0e7       coredns-668d6bf9bc-9th4p
	1eb8182139986       2be0bcf609c65       4 minutes ago        Exited              kindnet-cni               1                   4f306fc870453       kindnet-8nzqx
	446adec279b35       c3ff26fb59f37       5 minutes ago        Exited              kube-scheduler            1                   79184126536be       kube-scheduler-ha-082404
	67b42087d2405       334f34d04b9fe       5 minutes ago        Exited              kube-vip                  0                   21fa012c971bd       kube-vip-ha-082404
	8396fdc657769       2b5bd0f16085a       5 minutes ago        Exited              kube-apiserver            1                   b9be5e90858c5       kube-apiserver-ha-082404
	cebe98bc67ce9       7fc9d4aa817aa       6 minutes ago        Exited              etcd                      1                   5f897aff3753b       etcd-ha-082404
	
	
	==> coredns [617f21ce9a32] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33624 - 23686 "HINFO IN 5946540837965325478.1939124093402533604. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023049173s
	
	
	==> coredns [6210fc1a4717] <==
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46502 - 35570 "HINFO IN 5699985272064913748.8046935519882549673. udp 57 false 512" - - 0 6.001566228s
	[ERROR] plugin/errors: 2 5699985272064913748.8046935519882549673. HINFO: read udp 10.244.0.3:57093->192.168.49.1:53: i/o timeout
	[INFO] 127.0.0.1:48527 - 14267 "HINFO IN 5699985272064913748.8046935519882549673. udp 57 false 512" NXDOMAIN qr,rd,ra 57 4.030660446s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:55742 - 9085 "HINFO IN 5699985272064913748.8046935519882549673. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.004828683s
	[INFO] 127.0.0.1:53595 - 8649 "HINFO IN 5699985272064913748.8046935519882549673. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.002796423s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1187239052]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 19:52:01.427) (total time: 30000ms):
	Trace[1187239052]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:52:31.428)
	Trace[1187239052]: [30.000725026s] [30.000725026s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[895270623]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 19:52:01.427) (total time: 30001ms):
	Trace[895270623]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:52:31.428)
	Trace[895270623]: [30.001121303s] [30.001121303s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1413824115]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 19:52:01.427) (total time: 30006ms):
	Trace[1413824115]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30006ms (19:52:31.434)
	Trace[1413824115]: [30.006827404s] [30.006827404s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a5762f47569] <==
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45746 - 10236 "HINFO IN 1388450221469688476.2952753411806229028. udp 57 false 512" - - 0 6.00172819s
	[ERROR] plugin/errors: 2 1388450221469688476.2952753411806229028. HINFO: read udp 10.244.0.4:48599->192.168.49.1:53: i/o timeout
	[INFO] 127.0.0.1:38502 - 11599 "HINFO IN 1388450221469688476.2952753411806229028. udp 57 false 512" NXDOMAIN qr,rd,ra 57 4.041385082s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:43462 - 4711 "HINFO IN 1388450221469688476.2952753411806229028. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.005573289s
	[INFO] 127.0.0.1:48789 - 38002 "HINFO IN 1388450221469688476.2952753411806229028. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003795173s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[656955583]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 19:52:01.425) (total time: 30006ms):
	Trace[656955583]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30006ms (19:52:31.431)
	Trace[656955583]: [30.006407337s] [30.006407337s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[407629884]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 19:52:01.425) (total time: 30001ms):
	Trace[407629884]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:52:31.426)
	Trace[407629884]: [30.001502597s] [30.001502597s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2005610125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 19:52:01.424) (total time: 30004ms):
	Trace[2005610125]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:52:31.426)
	Trace[2005610125]: [30.004141168s] [30.004141168s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [893c11024e00] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55418 - 8032 "HINFO IN 2239110289631467918.6529411791828990824. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012461126s
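
The CoreDNS instances above log "dial tcp 10.96.0.1:443: i/o timeout" against the in-cluster API service VIP for roughly 30s and then receive SIGTERM, which is consistent with the API server being unreachable while the cluster was stopped and restarted rather than a CoreDNS configuration fault. A minimal follow-up check that the VIP is backed by live apiserver endpoints again (hedged sketch; assumes the ha-082404 kubeconfig context is active and that CoreDNS carries the stock k8s-app=kube-dns label):

    kubectl get endpoints kubernetes -n default
    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20

If the kubernetes endpoints object still lists stale control-plane IPs, these timeouts will continue until kube-apiserver finishes re-registering.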
	
	
	==> describe nodes <==
	Name:               ha-082404
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-082404
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=ha-082404
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T19_46_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 19:46:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-082404
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 19:56:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 19:55:26 +0000   Mon, 16 Dec 2024 19:46:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 19:55:26 +0000   Mon, 16 Dec 2024 19:46:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 19:55:26 +0000   Mon, 16 Dec 2024 19:46:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 19:55:26 +0000   Mon, 16 Dec 2024 19:46:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-082404
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b4782f958ee44199c9ef446eeb3cb98
	  System UUID:                5d4c52bd-6456-4b8b-b2a7-ff86570014a2
	  Boot ID:                    e1bb55ba-ca99-49c9-b685-77652a8efae1
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-f7kww             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 coredns-668d6bf9bc-9th4p             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-668d6bf9bc-mwl2r             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-082404                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-8nzqx                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-082404             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-082404    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-x7xbp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-082404             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-082404                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m24s                kube-proxy       
	  Normal   Starting                 43s                  kube-proxy       
	  Normal   Starting                 10m                  kube-proxy       
	  Warning  CgroupV1                 10m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m                  kubelet          Node ha-082404 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m                  kubelet          Node ha-082404 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                  kubelet          Node ha-082404 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 10m                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           10m                  node-controller  Node ha-082404 event: Registered Node ha-082404 in Controller
	  Normal   RegisteredNode           9m34s                node-controller  Node ha-082404 event: Registered Node ha-082404 in Controller
	  Normal   RegisteredNode           8m50s                node-controller  Node ha-082404 event: Registered Node ha-082404 in Controller
	  Normal   NodeHasSufficientPID     6m6s (x7 over 6m6s)  kubelet          Node ha-082404 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m6s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 6m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 6m6s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m6s (x8 over 6m6s)  kubelet          Node ha-082404 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m6s (x8 over 6m6s)  kubelet          Node ha-082404 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           5m21s                node-controller  Node ha-082404 event: Registered Node ha-082404 in Controller
	  Normal   RegisteredNode           5m10s                node-controller  Node ha-082404 event: Registered Node ha-082404 in Controller
	  Normal   RegisteredNode           3m12s                node-controller  Node ha-082404 event: Registered Node ha-082404 in Controller
	  Normal   Starting                 99s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 99s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  99s (x8 over 99s)    kubelet          Node ha-082404 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    99s (x8 over 99s)    kubelet          Node ha-082404 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     99s (x7 over 99s)    kubelet          Node ha-082404 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  99s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           53s                  node-controller  Node ha-082404 event: Registered Node ha-082404 in Controller
	  Normal   RegisteredNode           25s                  node-controller  Node ha-082404 event: Registered Node ha-082404 in Controller
	
	
	Name:               ha-082404-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-082404-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=ha-082404
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_16T19_46_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 19:46:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-082404-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 19:56:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 19:55:28 +0000   Mon, 16 Dec 2024 19:46:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 19:55:28 +0000   Mon, 16 Dec 2024 19:46:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 19:55:28 +0000   Mon, 16 Dec 2024 19:46:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 19:55:28 +0000   Mon, 16 Dec 2024 19:46:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-082404-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2a12161850a4ea2a451399d7ccade2d
	  System UUID:                c3768833-08bd-45c4-8427-6f0c3f5b0998
	  Boot ID:                    e1bb55ba-ca99-49c9-b685-77652a8efae1
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-mdgdk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 etcd-ha-082404-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m46s
	  kube-system                 kindnet-p6stw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m47s
	  kube-system                 kube-apiserver-ha-082404-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-controller-manager-ha-082404-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-proxy-wmg6k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-scheduler-ha-082404-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 kube-vip-ha-082404-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   Starting                 4m47s                  kube-proxy       
	  Normal   Starting                 6m50s                  kube-proxy       
	  Normal   Starting                 9m37s                  kube-proxy       
	  Normal   NodeHasSufficientPID     9m48s (x7 over 9m48s)  kubelet          Node ha-082404-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  9m48s (x8 over 9m48s)  kubelet          Node ha-082404-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m48s (x8 over 9m48s)  kubelet          Node ha-082404-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m43s                  node-controller  Node ha-082404-m02 event: Registered Node ha-082404-m02 in Controller
	  Normal   RegisteredNode           9m34s                  node-controller  Node ha-082404-m02 event: Registered Node ha-082404-m02 in Controller
	  Normal   RegisteredNode           8m50s                  node-controller  Node ha-082404-m02 event: Registered Node ha-082404-m02 in Controller
	  Normal   NodeHasSufficientPID     7m26s (x7 over 7m26s)  kubelet          Node ha-082404-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 7m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m26s (x8 over 7m26s)  kubelet          Node ha-082404-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m26s (x8 over 7m26s)  kubelet          Node ha-082404-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m4s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m4s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  6m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m3s (x8 over 6m4s)    kubelet          Node ha-082404-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m3s (x8 over 6m4s)    kubelet          Node ha-082404-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m3s (x7 over 6m4s)    kubelet          Node ha-082404-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m21s                  node-controller  Node ha-082404-m02 event: Registered Node ha-082404-m02 in Controller
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-082404-m02 event: Registered Node ha-082404-m02 in Controller
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-082404-m02 event: Registered Node ha-082404-m02 in Controller
	  Normal   Starting                 96s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 96s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  96s (x8 over 96s)      kubelet          Node ha-082404-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s (x8 over 96s)      kubelet          Node ha-082404-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     96s (x7 over 96s)      kubelet          Node ha-082404-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  96s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           53s                    node-controller  Node ha-082404-m02 event: Registered Node ha-082404-m02 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-082404-m02 event: Registered Node ha-082404-m02 in Controller
	
	
	Name:               ha-082404-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-082404-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=ha-082404
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_16T19_48_27_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 19:48:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-082404-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 19:56:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 19:56:36 +0000   Mon, 16 Dec 2024 19:56:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 19:56:36 +0000   Mon, 16 Dec 2024 19:56:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 19:56:36 +0000   Mon, 16 Dec 2024 19:56:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 19:56:36 +0000   Mon, 16 Dec 2024 19:56:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-082404-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 1f0b459793e944f5a162701add5f2897
	  System UUID:                ec1dc162-320c-4a8b-904d-db619d30c85c
	  Boot ID:                    e1bb55ba-ca99-49c9-b685-77652a8efae1
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-2bw6v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kindnet-m64xz               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m11s
	  kube-system                 kube-proxy-pvlrj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m8s                   kube-proxy       
	  Normal   Starting                 2m35s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    8m11s (x2 over 8m11s)  kubelet          Node ha-082404-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  8m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     8m11s (x2 over 8m11s)  kubelet          Node ha-082404-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 8m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m11s (x2 over 8m11s)  kubelet          Node ha-082404-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                8m10s                  kubelet          Node ha-082404-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m10s                  node-controller  Node ha-082404-m04 event: Registered Node ha-082404-m04 in Controller
	  Normal   RegisteredNode           8m9s                   node-controller  Node ha-082404-m04 event: Registered Node ha-082404-m04 in Controller
	  Normal   RegisteredNode           8m8s                   node-controller  Node ha-082404-m04 event: Registered Node ha-082404-m04 in Controller
	  Normal   RegisteredNode           5m21s                  node-controller  Node ha-082404-m04 event: Registered Node ha-082404-m04 in Controller
	  Normal   RegisteredNode           5m10s                  node-controller  Node ha-082404-m04 event: Registered Node ha-082404-m04 in Controller
	  Normal   NodeNotReady             4m30s                  node-controller  Node ha-082404-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-082404-m04 event: Registered Node ha-082404-m04 in Controller
	  Normal   Starting                 2m56s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m55s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 2m55s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m49s (x7 over 2m55s)  kubelet          Node ha-082404-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m43s (x8 over 2m55s)  kubelet          Node ha-082404-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m43s (x8 over 2m55s)  kubelet          Node ha-082404-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           53s                    node-controller  Node ha-082404-m04 event: Registered Node ha-082404-m04 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-082404-m04 event: Registered Node ha-082404-m04 in Controller
	  Normal   Starting                 14s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  14s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     8s (x7 over 14s)       kubelet          Node ha-082404-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             3s                     node-controller  Node ha-082404-m04 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  1s (x8 over 14s)       kubelet          Node ha-082404-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    1s (x8 over 14s)       kubelet          Node ha-082404-m04 status is now: NodeHasNoDiskPressure
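
The failed assertion in this test (3 nodes Ready) concerns ha-082404-m04: its Ready condition above has LastTransitionTime 19:56:36, only seconds after the kubelet restart and NodeNotReady events listed under Events, so the node appears to have flapped back to Ready just after the check ran. A quick way to read the condition directly (illustrative only; node name taken from the output above, assuming kubectl access to the cluster):

    kubectl get node ha-082404-m04 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    kubectl get node ha-082404-m04 -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}{"\n"}'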
	
	
	==> dmesg <==
	[Dec16 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.455673] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026726] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.031497] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017044] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.631590] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.594930] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [3a57c691a62b] <==
	{"level":"warn","ts":"2024-12-16T19:55:24.499623Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:55:23.576775Z","time spent":"922.836807ms","remote":"127.0.0.1:48584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":29,"request content":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-12-16T19:55:24.430325Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"927.297414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 ","response":"range_response_count:12 size:8695"}
	{"level":"info","ts":"2024-12-16T19:55:24.499887Z","caller":"traceutil/trace.go:171","msg":"trace[1713466161] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:12; response_revision:2559; }","duration":"996.850972ms","start":"2024-12-16T19:55:23.503024Z","end":"2024-12-16T19:55:24.499875Z","steps":["trace[1713466161] 'agreement among raft nodes before linearized reading'  (duration: 927.252099ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:55:24.499957Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:55:23.502987Z","time spent":"996.937493ms","remote":"127.0.0.1:48604","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":12,"response size":8719,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	{"level":"warn","ts":"2024-12-16T19:55:24.430348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"938.925208ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" limit:500 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T19:55:24.500194Z","caller":"traceutil/trace.go:171","msg":"trace[1043412968] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2559; }","duration":"1.008763893s","start":"2024-12-16T19:55:23.491419Z","end":"2024-12-16T19:55:24.500183Z","steps":["trace[1043412968] 'agreement among raft nodes before linearized reading'  (duration: 938.91432ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:55:24.500240Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:55:23.491384Z","time spent":"1.008845368s","remote":"127.0.0.1:48660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":29,"request content":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" limit:500 "}
	{"level":"warn","ts":"2024-12-16T19:55:24.430366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"982.134204ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T19:55:24.525276Z","caller":"traceutil/trace.go:171","msg":"trace[1677327849] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:2559; }","duration":"1.077034411s","start":"2024-12-16T19:55:23.448228Z","end":"2024-12-16T19:55:24.525262Z","steps":["trace[1677327849] 'agreement among raft nodes before linearized reading'  (duration: 982.127697ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:55:24.525349Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:55:23.448197Z","time spent":"1.077128603s","remote":"127.0.0.1:48760","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":29,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	{"level":"warn","ts":"2024-12-16T19:55:24.430413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"992.220177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:500 ","response":"range_response_count:4 size:2585"}
	{"level":"info","ts":"2024-12-16T19:55:24.525509Z","caller":"traceutil/trace.go:171","msg":"trace[1080619265] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:4; response_revision:2559; }","duration":"1.087313494s","start":"2024-12-16T19:55:23.438188Z","end":"2024-12-16T19:55:24.525502Z","steps":["trace[1080619265] 'agreement among raft nodes before linearized reading'  (duration: 992.185478ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:55:24.525535Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:55:23.438159Z","time spent":"1.087366022s","remote":"127.0.0.1:48336","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":4,"response size":2609,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:500 "}
	{"level":"warn","ts":"2024-12-16T19:55:24.430450Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.188462634s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2024-12-16T19:55:24.525743Z","caller":"traceutil/trace.go:171","msg":"trace[1357142503] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:2559; }","duration":"1.283751375s","start":"2024-12-16T19:55:23.241984Z","end":"2024-12-16T19:55:24.525735Z","steps":["trace[1357142503] 'agreement among raft nodes before linearized reading'  (duration: 1.188438454s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:55:24.525770Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:55:23.241970Z","time spent":"1.283790209s","remote":"127.0.0.1:48628","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":466,"request content":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 "}
	{"level":"warn","ts":"2024-12-16T19:55:24.430468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.188615679s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T19:55:24.529601Z","caller":"traceutil/trace.go:171","msg":"trace[1658249732] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:2559; }","duration":"1.287735747s","start":"2024-12-16T19:55:23.241849Z","end":"2024-12-16T19:55:24.529585Z","steps":["trace[1658249732] 'agreement among raft nodes before linearized reading'  (duration: 1.188608656s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:55:24.533718Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:55:23.241800Z","time spent":"1.29188782s","remote":"127.0.0.1:48608","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":29,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"warn","ts":"2024-12-16T19:55:24.430502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.616093415s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-lhvlhw2affhkalxzqnql7m47si\" limit:1 ","response":"range_response_count:1 size:688"}
	{"level":"info","ts":"2024-12-16T19:55:24.534154Z","caller":"traceutil/trace.go:171","msg":"trace[1358004048] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-lhvlhw2affhkalxzqnql7m47si; range_end:; response_count:1; response_revision:2559; }","duration":"1.719733938s","start":"2024-12-16T19:55:22.814405Z","end":"2024-12-16T19:55:24.534138Z","steps":["trace[1358004048] 'agreement among raft nodes before linearized reading'  (duration: 1.616070228s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:55:24.534188Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:55:22.814375Z","time spent":"1.719799659s","remote":"127.0.0.1:48542","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":712,"request content":"key:\"/registry/leases/kube-system/apiserver-lhvlhw2affhkalxzqnql7m47si\" limit:1 "}
	{"level":"warn","ts":"2024-12-16T19:55:24.430537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.809944161s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" limit:1 ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-12-16T19:55:24.534380Z","caller":"traceutil/trace.go:171","msg":"trace[1569502169] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:2559; }","duration":"1.913780362s","start":"2024-12-16T19:55:22.620589Z","end":"2024-12-16T19:55:24.534369Z","steps":["trace[1569502169] 'agreement among raft nodes before linearized reading'  (duration: 1.809921088s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:55:24.534405Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:55:22.620577Z","time spent":"1.913819212s","remote":"127.0.0.1:48272","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":1,"response size":140,"request content":"key:\"/registry/ranges/serviceips\" limit:1 "}
	
	
	==> etcd [cebe98bc67ce] <==
	{"level":"warn","ts":"2024-12-16T19:54:39.405253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.109558006s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-12-16T19:54:39.405266Z","caller":"traceutil/trace.go:171","msg":"trace[1483849555] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; }","duration":"1.109571314s","start":"2024-12-16T19:54:38.295689Z","end":"2024-12-16T19:54:39.405261Z","steps":["trace[1483849555] 'agreement among raft nodes before linearized reading'  (duration: 1.10911642s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:54:39.405285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:54:38.295653Z","time spent":"1.109621553s","remote":"127.0.0.1:52278","response type":"/etcdserverpb.KV/Range","request count":0,"request size":90,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true "}
	2024/12/16 19:54:39 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-16T19:54:39.405317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.807622518s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-12-16T19:54:39.405337Z","caller":"traceutil/trace.go:171","msg":"trace[524508747] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"7.807642915s","start":"2024-12-16T19:54:31.597689Z","end":"2024-12-16T19:54:39.405332Z","steps":["trace[524508747] 'agreement among raft nodes before linearized reading'  (duration: 7.807124244s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:54:39.405351Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:54:31.597637Z","time spent":"7.807709858s","remote":"127.0.0.1:52308","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" count_only:true "}
	2024/12/16 19:54:39 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-16T19:54:39.405385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.610930852s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-12-16T19:54:39.405406Z","caller":"traceutil/trace.go:171","msg":"trace[1517117004] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; }","duration":"9.610944734s","start":"2024-12-16T19:54:29.794449Z","end":"2024-12-16T19:54:39.405394Z","steps":["trace[1517117004] 'agreement among raft nodes before linearized reading'  (duration: 9.610371501s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:54:39.405420Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:54:29.794394Z","time spent":"9.611021302s","remote":"127.0.0.1:51990","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":0,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true "}
	2024/12/16 19:54:39 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-16T19:54:39.452176Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T19:54:39.452224Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-16T19:54:39.452275Z","caller":"etcdserver/server.go:1534","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-12-16T19:54:39.452412Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e74003b2f6d37ab0"}
	{"level":"info","ts":"2024-12-16T19:54:39.452427Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e74003b2f6d37ab0"}
	{"level":"info","ts":"2024-12-16T19:54:39.452447Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e74003b2f6d37ab0"}
	{"level":"info","ts":"2024-12-16T19:54:39.452473Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"e74003b2f6d37ab0"}
	{"level":"info","ts":"2024-12-16T19:54:39.452525Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e74003b2f6d37ab0"}
	{"level":"info","ts":"2024-12-16T19:54:39.452562Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e74003b2f6d37ab0"}
	{"level":"info","ts":"2024-12-16T19:54:39.452572Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e74003b2f6d37ab0"}
	{"level":"info","ts":"2024-12-16T19:54:39.456203Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-16T19:54:39.456347Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-16T19:54:39.456375Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"ha-082404","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 19:56:37 up 39 min,  0 users,  load average: 2.31, 2.89, 2.41
	Linux ha-082404 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1eb818213998] <==
	I1216 19:54:08.919128       1 main.go:301] handling current node
	I1216 19:54:08.919147       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 19:54:08.919181       1 main.go:324] Node ha-082404-m02 has CIDR [10.244.1.0/24] 
	I1216 19:54:08.919390       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1216 19:54:08.919412       1 main.go:324] Node ha-082404-m03 has CIDR [10.244.2.0/24] 
	I1216 19:54:08.919596       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 19:54:08.919616       1 main.go:324] Node ha-082404-m04 has CIDR [10.244.3.0/24] 
	I1216 19:54:18.921207       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 19:54:18.921307       1 main.go:324] Node ha-082404-m02 has CIDR [10.244.1.0/24] 
	I1216 19:54:18.921519       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 19:54:18.921538       1 main.go:324] Node ha-082404-m04 has CIDR [10.244.3.0/24] 
	I1216 19:54:18.921708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 19:54:18.921724       1 main.go:301] handling current node
	I1216 19:54:28.919262       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 19:54:28.919299       1 main.go:324] Node ha-082404-m02 has CIDR [10.244.1.0/24] 
	I1216 19:54:28.919483       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 19:54:28.919494       1 main.go:324] Node ha-082404-m04 has CIDR [10.244.3.0/24] 
	I1216 19:54:28.919563       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 19:54:28.919570       1 main.go:301] handling current node
	I1216 19:54:38.926805       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 19:54:38.926872       1 main.go:301] handling current node
	I1216 19:54:38.926894       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 19:54:38.926901       1 main.go:324] Node ha-082404-m02 has CIDR [10.244.1.0/24] 
	I1216 19:54:38.934582       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 19:54:38.934621       1 main.go:324] Node ha-082404-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d5a5339e5c89] <==
	I1216 19:56:07.518650       1 main.go:301] handling current node
	I1216 19:56:07.522146       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 19:56:07.522182       1 main.go:324] Node ha-082404-m02 has CIDR [10.244.1.0/24] 
	I1216 19:56:07.522340       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0 Realm: 0} 
	I1216 19:56:07.522413       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 19:56:07.522421       1 main.go:324] Node ha-082404-m04 has CIDR [10.244.3.0/24] 
	I1216 19:56:07.522472       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0 Realm: 0} 
	I1216 19:56:17.518554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 19:56:17.518590       1 main.go:301] handling current node
	I1216 19:56:17.518607       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 19:56:17.518614       1 main.go:324] Node ha-082404-m02 has CIDR [10.244.1.0/24] 
	I1216 19:56:17.518906       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 19:56:17.518923       1 main.go:324] Node ha-082404-m04 has CIDR [10.244.3.0/24] 
	I1216 19:56:27.519682       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 19:56:27.519937       1 main.go:324] Node ha-082404-m02 has CIDR [10.244.1.0/24] 
	I1216 19:56:27.520207       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 19:56:27.520245       1 main.go:324] Node ha-082404-m04 has CIDR [10.244.3.0/24] 
	I1216 19:56:27.520360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 19:56:27.520394       1 main.go:301] handling current node
	I1216 19:56:37.518984       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1216 19:56:37.519018       1 main.go:324] Node ha-082404-m04 has CIDR [10.244.3.0/24] 
	I1216 19:56:37.519293       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 19:56:37.519312       1 main.go:301] handling current node
	I1216 19:56:37.519325       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1216 19:56:37.519332       1 main.go:324] Node ha-082404-m02 has CIDR [10.244.1.0/24] 
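
The kindnet logs above show its per-node reconciliation loop: for each remote node it programs a route to that node's pod CIDR via the node's internal IP (for example 10.244.1.0/24 via 192.168.49.3 and 10.244.3.0/24 via 192.168.49.5). A hedged way to confirm the routes landed on the primary node (assumes minikube ssh accepts a trailing command for this profile):

    minikube -p ha-082404 ssh -- ip route show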
	
	
	==> kube-apiserver [8396fdc65776] <==
	W1216 19:54:48.592065       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.601651       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.626584       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.630262       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.744659       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.759050       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.786973       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.792435       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.813269       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.844353       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.849794       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.897503       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.916387       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.925840       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:48.954290       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.017300       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.089791       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.176605       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.218401       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.296152       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.323988       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.377521       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.456849       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.461326       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 19:54:49.468892       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [938f2f755d1e] <==
	E1216 19:55:24.385229       1 cacher.go:478] cacher (resourcequotas): unexpected ListAndWatch error: failed to list *core.ResourceQuota: etcdserver: leader changed; reinitializing...
	E1216 19:55:24.385124       1 watcher.go:342] watch chan error: etcdserver: no leader
	I1216 19:55:24.451119       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 19:55:24.452147       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 19:55:24.452308       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 19:55:24.452391       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 19:55:24.483230       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1216 19:55:24.489578       1 cache.go:39] Caches are synced for autoregister controller
	I1216 19:55:24.541209       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 19:55:24.561152       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1216 19:55:24.561376       1 policy_source.go:240] refreshing policies
	I1216 19:55:24.564516       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 19:55:24.595425       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1216 19:55:24.619617       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1216 19:55:24.621787       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1216 19:55:24.621930       1 controller.go:615] quota admission added evaluator for: endpoints
	I1216 19:55:24.646764       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 19:55:24.703660       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1216 19:55:24.732216       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1216 19:55:24.849533       1 shared_informer.go:320] Caches are synced for configmaps
	W1216 19:55:25.347548       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1216 19:55:26.792490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1216 19:55:45.170646       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1216 19:55:45.281611       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1216 19:55:45.363145       1 controller.go:615] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [35b1fb0a1945] <==
	E1216 19:56:24.912210       1 gc_controller.go:151] "Failed to get node" err="node \"ha-082404-m03\" not found" logger="pod-garbage-collector-controller" node="ha-082404-m03"
	E1216 19:56:24.912219       1 gc_controller.go:151] "Failed to get node" err="node \"ha-082404-m03\" not found" logger="pod-garbage-collector-controller" node="ha-082404-m03"
	E1216 19:56:24.912231       1 gc_controller.go:151] "Failed to get node" err="node \"ha-082404-m03\" not found" logger="pod-garbage-collector-controller" node="ha-082404-m03"
	E1216 19:56:24.912239       1 gc_controller.go:151] "Failed to get node" err="node \"ha-082404-m03\" not found" logger="pod-garbage-collector-controller" node="ha-082404-m03"
	I1216 19:56:24.926811       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-082404-m03"
	I1216 19:56:24.969401       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-082404-m03"
	I1216 19:56:24.969432       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-082404-m03"
	I1216 19:56:25.029367       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-082404-m03"
	I1216 19:56:25.029624       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-f7n6r"
	I1216 19:56:25.067323       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-f7n6r"
	I1216 19:56:25.067573       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-082404-m03"
	I1216 19:56:25.102062       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-082404-m03"
	I1216 19:56:25.102092       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-082404-m03"
	I1216 19:56:25.158546       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-082404-m03"
	I1216 19:56:25.158579       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kr525"
	I1216 19:56:25.204077       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kr525"
	I1216 19:56:25.204305       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-082404-m03"
	I1216 19:56:25.261866       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-082404-m03"
	I1216 19:56:34.912839       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-082404-m04"
	I1216 19:56:34.944221       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-082404-m04"
	I1216 19:56:34.994389       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="36.613604ms"
	I1216 19:56:34.994508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="83.682µs"
	I1216 19:56:36.148753       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-082404-m04"
	I1216 19:56:36.148818       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-082404-m04"
	I1216 19:56:36.161548       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-082404-m04"
	
	
	==> kube-controller-manager [83f8881a95f7] <==
	I1216 19:55:08.421394       1 serving.go:386] Generated self-signed cert in-memory
	I1216 19:55:10.376435       1 controllermanager.go:185] "Starting" version="v1.32.0"
	I1216 19:55:10.376698       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 19:55:10.382278       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 19:55:10.382618       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1216 19:55:10.383341       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1216 19:55:10.383506       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1216 19:55:21.425586       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [6d70bd91fd79] <==
	I1216 19:55:53.916535       1 server_linux.go:66] "Using iptables proxy"
	I1216 19:55:54.007982       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1216 19:55:54.008076       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 19:55:54.032062       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 19:55:54.032273       1 server_linux.go:170] "Using iptables Proxier"
	I1216 19:55:54.034365       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 19:55:54.034931       1 server.go:497] "Version info" version="v1.32.0"
	I1216 19:55:54.034960       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 19:55:54.040067       1 config.go:329] "Starting node config controller"
	I1216 19:55:54.040092       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 19:55:54.040691       1 config.go:199] "Starting service config controller"
	I1216 19:55:54.040714       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 19:55:54.040820       1 config.go:105] "Starting endpoint slice config controller"
	I1216 19:55:54.040839       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 19:55:54.140809       1 shared_informer.go:320] Caches are synced for service config
	I1216 19:55:54.140961       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 19:55:54.141106       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7c8e374b5911] <==
	I1216 19:52:12.330957       1 server_linux.go:66] "Using iptables proxy"
	I1216 19:52:12.437917       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1216 19:52:12.438065       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 19:52:12.457169       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 19:52:12.457233       1 server_linux.go:170] "Using iptables Proxier"
	I1216 19:52:12.459160       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 19:52:12.459581       1 server.go:497] "Version info" version="v1.32.0"
	I1216 19:52:12.459607       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 19:52:12.468636       1 config.go:199] "Starting service config controller"
	I1216 19:52:12.468666       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 19:52:12.468690       1 config.go:105] "Starting endpoint slice config controller"
	I1216 19:52:12.468695       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 19:52:12.470281       1 config.go:329] "Starting node config controller"
	I1216 19:52:12.472310       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 19:52:12.569322       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 19:52:12.569326       1 shared_informer.go:320] Caches are synced for service config
	I1216 19:52:12.572470       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [446adec279b3] <==
	W1216 19:51:01.499231       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 19:51:01.499289       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:51:01.578208       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 19:51:01.578259       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 19:51:01.747435       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 19:51:01.747487       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:51:02.889902       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 19:51:02.890154       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:51:03.991404       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 19:51:03.991445       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:51:04.061316       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 19:51:04.061357       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 19:51:04.102087       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 19:51:04.102132       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 19:51:04.387747       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E1216 19:51:04.387790       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1216 19:51:15.904476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1216 19:54:05.861793       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-2bw6v\": pod busybox-58667487b6-2bw6v is already assigned to node \"ha-082404-m04\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-2bw6v" node="ha-082404-m04"
	E1216 19:54:05.862453       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 6790bfd4-9506-4ddc-87a9-15b3648efed0(default/busybox-58667487b6-2bw6v) wasn't assumed so cannot be forgotten" pod="default/busybox-58667487b6-2bw6v"
	E1216 19:54:05.862584       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-2bw6v\": pod busybox-58667487b6-2bw6v is already assigned to node \"ha-082404-m04\"" pod="default/busybox-58667487b6-2bw6v"
	I1216 19:54:05.862696       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-2bw6v" node="ha-082404-m04"
	I1216 19:54:39.355592       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1216 19:54:39.355633       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 19:54:39.355860       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1216 19:54:39.356634       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d8ab2b1e58da] <==
	W1216 19:55:20.341483       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 19:55:20.341701       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:20.453090       1 reflector.go:569] runtime/asm_arm64.s:1223: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 19:55:20.453301       1 reflector.go:166] "Unhandled Error" err="runtime/asm_arm64.s:1223: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1216 19:55:20.606441       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 19:55:20.606539       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:20.662040       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 19:55:20.662214       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:20.692269       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 19:55:20.692480       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:20.729246       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 19:55:20.729379       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:21.043797       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 19:55:21.043965       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:21.249588       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 19:55:21.249733       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:21.401310       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 19:55:21.401443       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:21.785339       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 19:55:21.785381       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:22.450063       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 19:55:22.450222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:55:22.614569       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 19:55:22.614628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1216 19:55:30.901803       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 19:55:41 ha-082404 kubelet[1550]: E1216 19:55:41.732749    1550 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j77pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-668d6bf9bc-mwl2r_kube-system(84f8cad3-3121-4fae-83c0-9fe5c573d6d4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Dec 16 19:55:41 ha-082404 kubelet[1550]: I1216 19:55:41.738426    1550 scope.go:117] "RemoveContainer" containerID="7c8e374b5911964b21d6497101b917d7f7444905fb8aca42d07a5d36a6f1c607"
	Dec 16 19:55:41 ha-082404 kubelet[1550]: I1216 19:55:41.738850    1550 scope.go:117] "RemoveContainer" containerID="1eb818213998658e85b4556cdc08f8d088f053cdbc968204be4192e5796cb9e1"
	Dec 16 19:55:41 ha-082404 kubelet[1550]: E1216 19:55:41.744474    1550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-668d6bf9bc-mwl2r" podUID="84f8cad3-3121-4fae-83c0-9fe5c573d6d4"
	Dec 16 19:55:41 ha-082404 kubelet[1550]: E1216 19:55:41.746500    1550 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.32.0,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xpc6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-proxy-x7xbp_kube-system(ce0d4ca6-fbc9-4f2f-996d-5bd01b41a14f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Dec 16 19:55:41 ha-082404 kubelet[1550]: E1216 19:55:41.750977    1550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-x7xbp" podUID="ce0d4ca6-fbc9-4f2f-996d-5bd01b41a14f"
	Dec 16 19:55:41 ha-082404 kubelet[1550]: E1216 19:55:41.746597    1550 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmfqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(3c0d0135-4746-4b03-9877-d30c5297116e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Dec 16 19:55:41 ha-082404 kubelet[1550]: E1216 19:55:41.760817    1550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="3c0d0135-4746-4b03-9877-d30c5297116e"
	Dec 16 19:55:41 ha-082404 kubelet[1550]: E1216 19:55:41.772568    1550 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20241108-5c6d2daf,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gc5k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-8nzqx_kube-system(c062cfe1-2c57-4040-8d48-673a935f60f6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Dec 16 19:55:41 ha-082404 kubelet[1550]: E1216 19:55:41.778058    1550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-8nzqx" podUID="c062cfe1-2c57-4040-8d48-673a935f60f6"
	Dec 16 19:55:43 ha-082404 kubelet[1550]: I1216 19:55:43.730445    1550 scope.go:117] "RemoveContainer" containerID="6210fc1a4717d690ac0ea2f282f72ccf2e2fd735a51b3dc5aa99de9648fb8d0c"
	Dec 16 19:55:43 ha-082404 kubelet[1550]: E1216 19:55:43.732690    1550 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2pns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-668d6bf9bc-9th4p_kube-system(56bab989-75df-426f-af86-73cef2741306): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Dec 16 19:55:43 ha-082404 kubelet[1550]: E1216 19:55:43.734142    1550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-668d6bf9bc-9th4p" podUID="56bab989-75df-426f-af86-73cef2741306"
	Dec 16 19:55:48 ha-082404 kubelet[1550]: E1216 19:55:48.944030    1550 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Dec 16 19:55:48 ha-082404 kubelet[1550]: E1216 19:55:48.944090    1550 helpers.go:851] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Dec 16 19:55:52 ha-082404 kubelet[1550]: I1216 19:55:52.729897    1550 scope.go:117] "RemoveContainer" containerID="6a5762f4756925da37a388b994da6d8386d3c97d40f775fc2107416eeda2fcf8"
	Dec 16 19:55:52 ha-082404 kubelet[1550]: E1216 19:55:52.730106    1550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-668d6bf9bc-mwl2r_kube-system(84f8cad3-3121-4fae-83c0-9fe5c573d6d4)\"" pod="kube-system/coredns-668d6bf9bc-mwl2r" podUID="84f8cad3-3121-4fae-83c0-9fe5c573d6d4"
	Dec 16 19:55:53 ha-082404 kubelet[1550]: I1216 19:55:53.729872    1550 scope.go:117] "RemoveContainer" containerID="7c8e374b5911964b21d6497101b917d7f7444905fb8aca42d07a5d36a6f1c607"
	Dec 16 19:55:54 ha-082404 kubelet[1550]: I1216 19:55:54.730523    1550 scope.go:117] "RemoveContainer" containerID="fb3fa2313cf97b14b6691ed06c9a4e06b659cbf82612c1aa2f5f293aae0521b5"
	Dec 16 19:55:55 ha-082404 kubelet[1550]: I1216 19:55:55.729910    1550 scope.go:117] "RemoveContainer" containerID="96368e60b6cd3376c22e9babff0f7805b393bd16245039735963d803d363c107"
	Dec 16 19:55:55 ha-082404 kubelet[1550]: I1216 19:55:55.730236    1550 scope.go:117] "RemoveContainer" containerID="6210fc1a4717d690ac0ea2f282f72ccf2e2fd735a51b3dc5aa99de9648fb8d0c"
	Dec 16 19:55:55 ha-082404 kubelet[1550]: E1216 19:55:55.730588    1550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-668d6bf9bc-9th4p_kube-system(56bab989-75df-426f-af86-73cef2741306)\"" pod="kube-system/coredns-668d6bf9bc-9th4p" podUID="56bab989-75df-426f-af86-73cef2741306"
	Dec 16 19:55:56 ha-082404 kubelet[1550]: I1216 19:55:56.732954    1550 scope.go:117] "RemoveContainer" containerID="1eb818213998658e85b4556cdc08f8d088f053cdbc968204be4192e5796cb9e1"
	Dec 16 19:56:05 ha-082404 kubelet[1550]: I1216 19:56:05.730162    1550 scope.go:117] "RemoveContainer" containerID="6a5762f4756925da37a388b994da6d8386d3c97d40f775fc2107416eeda2fcf8"
	Dec 16 19:56:10 ha-082404 kubelet[1550]: I1216 19:56:10.732085    1550 scope.go:117] "RemoveContainer" containerID="6210fc1a4717d690ac0ea2f282f72ccf2e2fd735a51b3dc5aa99de9648fb8d0c"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-082404 -n ha-082404
helpers_test.go:261: (dbg) Run:  kubectl --context ha-082404 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (109.02s)

                                                
                                    

Test pass (319/345)

Order passed test Duration (seconds)
3 TestDownloadOnly/v1.20.0/json-events 11
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.32
9 TestDownloadOnly/v1.20.0/DeleteAll 0.32
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.17
12 TestDownloadOnly/v1.32.0/json-events 6.71
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.09
18 TestDownloadOnly/v1.32.0/DeleteAll 0.21
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 58.99
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 224.48
29 TestAddons/serial/Volcano 41.96
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 10.94
35 TestAddons/parallel/Registry 16.32
36 TestAddons/parallel/Ingress 20.53
37 TestAddons/parallel/InspektorGadget 12.06
38 TestAddons/parallel/MetricsServer 6.78
40 TestAddons/parallel/CSI 48.37
41 TestAddons/parallel/Headlamp 17.86
42 TestAddons/parallel/CloudSpanner 6.55
43 TestAddons/parallel/LocalPath 53.95
44 TestAddons/parallel/NvidiaDevicePlugin 5.6
45 TestAddons/parallel/Yakd 11.99
47 TestAddons/StoppedEnableDisable 11.18
48 TestCertOptions 40.05
49 TestCertExpiration 252.41
50 TestDockerFlags 44.19
51 TestForceSystemdFlag 35.71
52 TestForceSystemdEnv 48.49
58 TestErrorSpam/setup 31.25
59 TestErrorSpam/start 0.82
60 TestErrorSpam/status 1.27
61 TestErrorSpam/pause 1.54
62 TestErrorSpam/unpause 1.42
63 TestErrorSpam/stop 2.06
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 41.37
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 28.6
70 TestFunctional/serial/KubeContext 0.08
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
75 TestFunctional/serial/CacheCmd/cache/add_local 0.96
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 43.19
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.18
86 TestFunctional/serial/LogsFileCmd 1.24
87 TestFunctional/serial/InvalidService 4.82
89 TestFunctional/parallel/ConfigCmd 0.64
90 TestFunctional/parallel/DashboardCmd 14.52
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.17
97 TestFunctional/parallel/ServiceCmdConnect 15.64
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 31.48
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.38
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.19
109 TestFunctional/parallel/NodeLabels 0.13
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
113 TestFunctional/parallel/License 0.25
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.45
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
129 TestFunctional/parallel/ServiceCmd/List 0.62
130 TestFunctional/parallel/MountCmd/any-port 8.59
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.72
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
133 TestFunctional/parallel/ServiceCmd/Format 0.46
134 TestFunctional/parallel/ServiceCmd/URL 0.55
135 TestFunctional/parallel/MountCmd/specific-port 2
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.83
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.33
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.55
144 TestFunctional/parallel/ImageCommands/Setup 0.74
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.13
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.81
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
151 TestFunctional/parallel/DockerEnv/bash 1.45
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 128.68
163 TestMultiControlPlane/serial/DeployApp 8.53
164 TestMultiControlPlane/serial/PingHostFromPods 1.68
165 TestMultiControlPlane/serial/AddWorkerNode 27.36
166 TestMultiControlPlane/serial/NodeLabels 0.14
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 20.73
169 TestMultiControlPlane/serial/StopSecondaryNode 11.76
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
171 TestMultiControlPlane/serial/RestartSecondaryNode 37.28
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.25
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 256.84
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.65
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
176 TestMultiControlPlane/serial/StopCluster 32.85
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.13
179 TestMultiControlPlane/serial/AddSecondaryNode 44.55
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
183 TestImageBuild/serial/Setup 31.01
184 TestImageBuild/serial/NormalBuild 2.42
185 TestImageBuild/serial/BuildWithBuildArg 1.19
186 TestImageBuild/serial/BuildWithDockerIgnore 0.95
187 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.8
191 TestJSONOutput/start/Command 49.83
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.61
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.53
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 10.89
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.24
216 TestKicCustomNetwork/create_custom_network 40.16
217 TestKicCustomNetwork/use_default_bridge_network 35.23
218 TestKicExistingNetwork 31.81
219 TestKicCustomSubnet 34.99
220 TestKicStaticIP 33.71
221 TestMainNoArgs 0.05
222 TestMinikubeProfile 75.41
225 TestMountStart/serial/StartWithMountFirst 7.67
226 TestMountStart/serial/VerifyMountFirst 0.28
227 TestMountStart/serial/StartWithMountSecond 8.06
228 TestMountStart/serial/VerifyMountSecond 0.27
229 TestMountStart/serial/DeleteFirst 1.48
230 TestMountStart/serial/VerifyMountPostDelete 0.29
231 TestMountStart/serial/Stop 1.2
232 TestMountStart/serial/RestartStopped 8.22
233 TestMountStart/serial/VerifyMountPostStop 0.31
236 TestMultiNode/serial/FreshStart2Nodes 80.95
237 TestMultiNode/serial/DeployApp2Nodes 48.87
238 TestMultiNode/serial/PingHostFrom2Pods 1.08
239 TestMultiNode/serial/AddNode 18.58
240 TestMultiNode/serial/MultiNodeLabels 0.12
241 TestMultiNode/serial/ProfileList 0.71
242 TestMultiNode/serial/CopyFile 10.48
243 TestMultiNode/serial/StopNode 2.37
244 TestMultiNode/serial/StartAfterStop 11.04
245 TestMultiNode/serial/RestartKeepsNodes 105.58
246 TestMultiNode/serial/DeleteNode 5.83
247 TestMultiNode/serial/StopMultiNode 21.6
248 TestMultiNode/serial/RestartMultiNode 51.68
249 TestMultiNode/serial/ValidateNameConflict 34.04
254 TestPreload 106.7
256 TestScheduledStopUnix 105.38
257 TestSkaffold 119.49
259 TestInsufficientStorage 13.63
260 TestRunningBinaryUpgrade 87.09
262 TestKubernetesUpgrade 388.02
263 TestMissingContainerUpgrade 163.64
265 TestPause/serial/Start 50.07
266 TestPause/serial/SecondStartNoReconfiguration 35.43
267 TestPause/serial/Pause 0.95
268 TestPause/serial/VerifyStatus 0.44
269 TestPause/serial/Unpause 0.76
270 TestPause/serial/PauseAgain 1.09
271 TestPause/serial/DeletePaused 2.81
272 TestPause/serial/VerifyDeletedResources 0.14
273 TestStoppedBinaryUpgrade/Setup 1.01
274 TestStoppedBinaryUpgrade/Upgrade 83.7
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.4
284 TestNoKubernetes/serial/StartNoK8sWithVersion 0.15
285 TestNoKubernetes/serial/StartWithK8s 43.76
286 TestNoKubernetes/serial/StartWithStopK8s 17.42
298 TestNoKubernetes/serial/Start 10.75
299 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
300 TestNoKubernetes/serial/ProfileList 1.2
301 TestNoKubernetes/serial/Stop 1.28
302 TestNoKubernetes/serial/StartNoArgs 8.73
303 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
305 TestStartStop/group/old-k8s-version/serial/FirstStart 167.93
307 TestStartStop/group/no-preload/serial/FirstStart 85.28
308 TestStartStop/group/old-k8s-version/serial/DeployApp 13.75
309 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.57
310 TestStartStop/group/old-k8s-version/serial/Stop 11.33
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
312 TestStartStop/group/old-k8s-version/serial/SecondStart 373.85
313 TestStartStop/group/no-preload/serial/DeployApp 11.44
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
315 TestStartStop/group/no-preload/serial/Stop 11.08
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/no-preload/serial/SecondStart 266.73
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
321 TestStartStop/group/no-preload/serial/Pause 3.03
323 TestStartStop/group/embed-certs/serial/FirstStart 51.3
324 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
327 TestStartStop/group/old-k8s-version/serial/Pause 3.56
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.51
330 TestStartStop/group/embed-certs/serial/DeployApp 9.47
331 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.33
332 TestStartStop/group/embed-certs/serial/Stop 10.87
333 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
334 TestStartStop/group/embed-certs/serial/SecondStart 267.02
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.53
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.53
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.25
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.21
340 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
342 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
343 TestStartStop/group/embed-certs/serial/Pause 2.94
345 TestStartStop/group/newest-cni/serial/FirstStart 39.52
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.5
350 TestNetworkPlugins/group/auto/Start 52.67
351 TestStartStop/group/newest-cni/serial/DeployApp 0
352 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
353 TestStartStop/group/newest-cni/serial/Stop 8.19
354 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
355 TestStartStop/group/newest-cni/serial/SecondStart 27.48
356 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
359 TestStartStop/group/newest-cni/serial/Pause 3.77
360 TestNetworkPlugins/group/custom-flannel/Start 57.86
361 TestNetworkPlugins/group/auto/KubeletFlags 0.36
362 TestNetworkPlugins/group/auto/NetCatPod 13.35
363 TestNetworkPlugins/group/auto/DNS 0.27
364 TestNetworkPlugins/group/auto/Localhost 0.19
365 TestNetworkPlugins/group/auto/HairPin 0.22
366 TestNetworkPlugins/group/false/Start 58
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.38
369 TestNetworkPlugins/group/custom-flannel/DNS 0.25
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
372 TestNetworkPlugins/group/kindnet/Start 73.47
373 TestNetworkPlugins/group/false/KubeletFlags 0.36
374 TestNetworkPlugins/group/false/NetCatPod 11.37
375 TestNetworkPlugins/group/false/DNS 0.28
376 TestNetworkPlugins/group/false/Localhost 0.28
377 TestNetworkPlugins/group/false/HairPin 0.18
378 TestNetworkPlugins/group/flannel/Start 57.97
379 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
380 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
381 TestNetworkPlugins/group/kindnet/NetCatPod 14.37
382 TestNetworkPlugins/group/kindnet/DNS 0.32
383 TestNetworkPlugins/group/kindnet/Localhost 0.26
384 TestNetworkPlugins/group/kindnet/HairPin 0.26
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.46
387 TestNetworkPlugins/group/flannel/NetCatPod 12.39
388 TestNetworkPlugins/group/enable-default-cni/Start 56.98
389 TestNetworkPlugins/group/flannel/DNS 0.23
390 TestNetworkPlugins/group/flannel/Localhost 0.22
391 TestNetworkPlugins/group/flannel/HairPin 0.2
392 TestNetworkPlugins/group/bridge/Start 78.55
393 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
394 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.35
395 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
396 TestNetworkPlugins/group/enable-default-cni/Localhost 0.29
397 TestNetworkPlugins/group/enable-default-cni/HairPin 0.25
398 TestNetworkPlugins/group/kubenet/Start 40.9
399 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
400 TestNetworkPlugins/group/bridge/NetCatPod 13.35
401 TestNetworkPlugins/group/bridge/DNS 0.3
402 TestNetworkPlugins/group/bridge/Localhost 0.19
403 TestNetworkPlugins/group/bridge/HairPin 0.19
404 TestNetworkPlugins/group/kubenet/KubeletFlags 0.45
405 TestNetworkPlugins/group/kubenet/NetCatPod 11.4
406 TestNetworkPlugins/group/kubenet/DNS 21.96
407 TestNetworkPlugins/group/calico/Start 76.66
408 TestNetworkPlugins/group/kubenet/Localhost 0.18
409 TestNetworkPlugins/group/kubenet/HairPin 0.25
410 TestNetworkPlugins/group/calico/ControllerPod 6.01
411 TestNetworkPlugins/group/calico/KubeletFlags 0.31
412 TestNetworkPlugins/group/calico/NetCatPod 10.26
413 TestNetworkPlugins/group/calico/DNS 0.18
414 TestNetworkPlugins/group/calico/Localhost 0.16
415 TestNetworkPlugins/group/calico/HairPin 0.17
TestDownloadOnly/v1.20.0/json-events (11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-478782 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-478782 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.001784103s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.00s)
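For reference, a single sub-test from this report can be re-run locally with the standard Go test runner. A minimal sketch, assuming the minikube repository's test/integration package and an already-built out/minikube-linux-arm64 binary (the package path and timeout are assumptions, not taken from this run):

    # hedged sketch -- re-run just the v1.20.0 download-only sub-test
    go test ./test/integration -v -timeout 30m \
        -run 'TestDownloadOnly/v1.20.0/json-events'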

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1216 19:34:57.464019    7569 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1216 19:34:57.464098    7569 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
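The check above only verifies that the preload tarball is already present in the local cache. A hedged shell sketch of the same check done by hand, assuming a default MINIKUBE_HOME (this CI run uses a per-job directory instead):

    # hedged sketch -- confirm the v1.20.0 docker/overlay2 arm64 preload is cached
    ls -lh "${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball/" \
        | grep 'v1.20.0-docker-overlay2-arm64'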

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-478782
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-478782: exit status 85 (319.663988ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-478782 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC |          |
	|         | -p download-only-478782        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 19:34:46
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 19:34:46.511122    7575 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:34:46.511294    7575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:46.511306    7575 out.go:358] Setting ErrFile to fd 2...
	I1216 19:34:46.511311    7575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:46.511614    7575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
	W1216 19:34:46.511825    7575 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20091-2258/.minikube/config/config.json: open /home/jenkins/minikube-integration/20091-2258/.minikube/config/config.json: no such file or directory
	I1216 19:34:46.512285    7575 out.go:352] Setting JSON to true
	I1216 19:34:46.513091    7575 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1032,"bootTime":1734376655,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 19:34:46.513160    7575 start.go:139] virtualization:  
	I1216 19:34:46.516905    7575 out.go:97] [download-only-478782] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1216 19:34:46.517134    7575 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 19:34:46.517174    7575 notify.go:220] Checking for updates...
	I1216 19:34:46.519963    7575 out.go:169] MINIKUBE_LOCATION=20091
	I1216 19:34:46.522817    7575 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:34:46.525553    7575 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:34:46.528377    7575 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	I1216 19:34:46.531085    7575 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 19:34:46.536390    7575 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 19:34:46.536658    7575 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 19:34:46.567387    7575 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 19:34:46.567508    7575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 19:34:46.923566    7575 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-16 19:34:46.914373954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 19:34:46.923675    7575 docker.go:318] overlay module found
	I1216 19:34:46.926595    7575 out.go:97] Using the docker driver based on user configuration
	I1216 19:34:46.926628    7575 start.go:297] selected driver: docker
	I1216 19:34:46.926636    7575 start.go:901] validating driver "docker" against <nil>
	I1216 19:34:46.926747    7575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 19:34:46.989414    7575 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-16 19:34:46.980898116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 19:34:46.989619    7575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 19:34:46.989941    7575 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1216 19:34:46.990107    7575 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 19:34:46.992900    7575 out.go:169] Using Docker driver with root privileges
	I1216 19:34:46.995524    7575 cni.go:84] Creating CNI manager for ""
	I1216 19:34:46.995593    7575 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1216 19:34:46.995669    7575 start.go:340] cluster config:
	{Name:download-only-478782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-478782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:34:46.998529    7575 out.go:97] Starting "download-only-478782" primary control-plane node in "download-only-478782" cluster
	I1216 19:34:46.998565    7575 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 19:34:47.001327    7575 out.go:97] Pulling base image v0.0.45-1734029593-20090 ...
	I1216 19:34:47.001363    7575 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 19:34:47.001520    7575 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1216 19:34:47.018521    7575 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1216 19:34:47.018716    7575 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1216 19:34:47.018823    7575 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1216 19:34:47.060574    7575 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 19:34:47.060608    7575 cache.go:56] Caching tarball of preloaded images
	I1216 19:34:47.060780    7575 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1216 19:34:47.064315    7575 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1216 19:34:47.064339    7575 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 19:34:47.160731    7575 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1216 19:34:55.850969    7575 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 19:34:55.851073    7575 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-478782 host does not exist
	  To start a cluster, run: "minikube start -p download-only-478782"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.32s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.32s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-478782
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnly/v1.32.0/json-events (6.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-847332 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-847332 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.713103688s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (6.71s)

                                                
                                    
TestDownloadOnly/v1.32.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I1216 19:35:04.982255    7569 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
I1216 19:35:04.982297    7569 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-847332
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-847332: exit status 85 (85.222406ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-478782 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC |                     |
	|         | -p download-only-478782        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
	| delete  | -p download-only-478782        | download-only-478782 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
	| start   | -o=json --download-only        | download-only-847332 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC |                     |
	|         | -p download-only-847332        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 19:34:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 19:34:58.324609    7774 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:34:58.324763    7774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:58.324791    7774 out.go:358] Setting ErrFile to fd 2...
	I1216 19:34:58.324796    7774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:58.325083    7774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
	I1216 19:34:58.325518    7774 out.go:352] Setting JSON to true
	I1216 19:34:58.326305    7774 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1044,"bootTime":1734376655,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 19:34:58.326380    7774 start.go:139] virtualization:  
	I1216 19:34:58.329946    7774 out.go:97] [download-only-847332] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1216 19:34:58.330164    7774 notify.go:220] Checking for updates...
	I1216 19:34:58.333218    7774 out.go:169] MINIKUBE_LOCATION=20091
	I1216 19:34:58.336188    7774 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:34:58.339066    7774 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:34:58.341903    7774 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	I1216 19:34:58.344746    7774 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 19:34:58.350122    7774 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 19:34:58.350403    7774 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 19:34:58.375382    7774 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 19:34:58.375494    7774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 19:34:58.444146    7774 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-16 19:34:58.435562945 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 19:34:58.444256    7774 docker.go:318] overlay module found
	I1216 19:34:58.447099    7774 out.go:97] Using the docker driver based on user configuration
	I1216 19:34:58.447126    7774 start.go:297] selected driver: docker
	I1216 19:34:58.447134    7774 start.go:901] validating driver "docker" against <nil>
	I1216 19:34:58.447229    7774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 19:34:58.503124    7774 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-16 19:34:58.494298687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 19:34:58.503334    7774 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 19:34:58.503616    7774 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1216 19:34:58.503782    7774 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 19:34:58.506714    7774 out.go:169] Using Docker driver with root privileges
	I1216 19:34:58.509348    7774 cni.go:84] Creating CNI manager for ""
	I1216 19:34:58.509418    7774 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 19:34:58.509441    7774 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 19:34:58.509530    7774 start.go:340] cluster config:
	{Name:download-only-847332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:download-only-847332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:34:58.512363    7774 out.go:97] Starting "download-only-847332" primary control-plane node in "download-only-847332" cluster
	I1216 19:34:58.512388    7774 cache.go:121] Beginning downloading kic base image for docker with docker
	I1216 19:34:58.515032    7774 out.go:97] Pulling base image v0.0.45-1734029593-20090 ...
	I1216 19:34:58.515059    7774 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 19:34:58.515225    7774 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1216 19:34:58.531588    7774 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1216 19:34:58.531739    7774 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1216 19:34:58.531763    7774 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory, skipping pull
	I1216 19:34:58.531772    7774 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in cache, skipping pull
	I1216 19:34:58.531780    7774 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 as a tarball
	I1216 19:34:58.576349    7774 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	I1216 19:34:58.576379    7774 cache.go:56] Caching tarball of preloaded images
	I1216 19:34:58.576539    7774 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime docker
	I1216 19:34:58.579338    7774 out.go:97] Downloading Kubernetes v1.32.0 preload ...
	I1216 19:34:58.579362    7774 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4 ...
	I1216 19:34:58.677049    7774 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4?checksum=md5:ff0c92f745fa493248e668330d02c326 -> /home/jenkins/minikube-integration/20091-2258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-847332 host does not exist
	  To start a cluster, run: "minikube start -p download-only-847332"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-847332
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 19:35:06.242412    7569 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-991316 --alsologtostderr --binary-mirror http://127.0.0.1:38107 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-991316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-991316
--- PASS: TestBinaryMirror (0.59s)
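TestBinaryMirror points --binary-mirror at a local HTTP endpoint. A hedged sketch of exercising the same flag by hand; the port, mirror directory, and profile name below are illustrative and not taken from the test, and the mirror would need to serve binaries at the paths minikube expects:

    # hedged sketch -- serve a directory over HTTP and use it as the binary mirror
    python3 -m http.server 38107 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
        --binary-mirror http://127.0.0.1:38107 --driver=docker --container-runtime=docker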

                                                
                                    
TestOffline (58.99s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-103154 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-103154 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (56.849155431s)
helpers_test.go:175: Cleaning up "offline-docker-103154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-103154
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-103154: (2.137667362s)
--- PASS: TestOffline (58.99s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-309585
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-309585: exit status 85 (75.390938ms)

                                                
                                                
-- stdout --
	* Profile "addons-309585" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-309585"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-309585
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-309585: exit status 85 (80.696605ms)

                                                
                                                
-- stdout --
	* Profile "addons-309585" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-309585"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (224.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-309585 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-309585 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m44.474346501s)
--- PASS: TestAddons/Setup (224.48s)
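The --addons flags above enable everything in a single start invocation; the same addons can also be toggled afterwards on the running profile with the addons subcommand used elsewhere in this report. A brief sketch (addon name taken from the start flags above; `addons list` is the standard listing command, not shown in this log):

    # hedged sketch -- enable/disable individual addons on the running profile
    out/minikube-linux-arm64 -p addons-309585 addons enable metrics-server
    out/minikube-linux-arm64 -p addons-309585 addons disable metrics-server
    out/minikube-linux-arm64 -p addons-309585 addons list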

                                                
                                    
TestAddons/serial/Volcano (41.96s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 56.807961ms
addons_test.go:823: volcano-controller stabilized in 56.920163ms
addons_test.go:815: volcano-admission stabilized in 57.865414ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-22l7n" [0ea7ad1d-3794-4a79-9b98-400820129be4] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003839347s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-4w8mw" [2c19d3e4-a1e7-4984-ac9e-0d2f34b2a482] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003718181s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-pv46h" [1ccc96f6-e93c-4a82-b6f8-5893d7f96b66] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003954314s
addons_test.go:842: (dbg) Run:  kubectl --context addons-309585 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-309585 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-309585 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [594ad938-c545-4271-8470-788a4a717e91] Pending
helpers_test.go:344: "test-job-nginx-0" [594ad938-c545-4271-8470-788a4a717e91] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [594ad938-c545-4271-8470-788a4a717e91] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003928242s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-309585 addons disable volcano --alsologtostderr -v=1: (11.290878986s)
--- PASS: TestAddons/serial/Volcano (41.96s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-309585 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-309585 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.94s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-309585 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-309585 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f73594f8-031d-416c-83c1-af102aadf8de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f73594f8-031d-416c-83c1-af102aadf8de] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003924985s
addons_test.go:633: (dbg) Run:  kubectl --context addons-309585 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-309585 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-309585 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-309585 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.94s)

                                                
                                    
TestAddons/parallel/Registry (16.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.811355ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-jl4n6" [f28edd82-6ff8-4732-bf2f-f90a78375575] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004668139s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fkz7r" [4cb9775b-f35b-415a-bf2f-5e43f7184e40] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004999175s
addons_test.go:331: (dbg) Run:  kubectl --context addons-309585 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-309585 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-309585 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.270936237s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 ip
2024/12/16 19:40:09 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.32s)
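The registry checks above can be reproduced by hand against the same profile: an in-cluster probe of the registry Service, then a direct request to port 5000 on the node IP printed by `minikube ip` (commands mirror the ones in this log; the probe pod name is illustrative):

    # hedged sketch -- probe the registry addon the same way the test does
    kubectl --context addons-309585 run registry-probe --rm -it --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -- \
        wget --spider -S http://registry.kube-system.svc.cluster.local
    curl -sI http://192.168.49.2:5000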

                                                
                                    
TestAddons/parallel/Ingress (20.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-309585 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-309585 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-309585 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3e380cf3-ce63-45e2-b623-27d12c611f13] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3e380cf3-ce63-45e2-b623-27d12c611f13] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004486706s
I1216 19:41:33.277246    7569 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-309585 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-309585 addons disable ingress-dns --alsologtostderr -v=1: (1.158102822s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-309585 addons disable ingress --alsologtostderr -v=1: (7.745379147s)
--- PASS: TestAddons/parallel/Ingress (20.53s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.06s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-s6d4l" [ac87679f-2c8d-4cd7-8e7b-67613aaa925f] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005465851s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-309585 addons disable inspektor-gadget --alsologtostderr -v=1: (6.05116287s)
--- PASS: TestAddons/parallel/InspektorGadget (12.06s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 9.680977ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-jrvpq" [97a4f66f-7fc8-4a65-8cca-63b6358835b6] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004044943s
addons_test.go:402: (dbg) Run:  kubectl --context addons-309585 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.78s)

                                                
                                    
TestAddons/parallel/CSI (48.37s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1216 19:40:35.558700    7569 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 19:40:35.564206    7569 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 19:40:35.564238    7569 kapi.go:107] duration metric: took 8.333064ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.343763ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-309585 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-309585 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cde1daa7-7579-4ff8-a7fb-07e7fbb23f58] Pending
helpers_test.go:344: "task-pv-pod" [cde1daa7-7579-4ff8-a7fb-07e7fbb23f58] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cde1daa7-7579-4ff8-a7fb-07e7fbb23f58] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003669754s
addons_test.go:511: (dbg) Run:  kubectl --context addons-309585 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-309585 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-309585 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-309585 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-309585 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-309585 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-309585 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ccde80e3-3675-482e-826a-05b72041dd8e] Pending
helpers_test.go:344: "task-pv-pod-restore" [ccde80e3-3675-482e-826a-05b72041dd8e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ccde80e3-3675-482e-826a-05b72041dd8e] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004018259s
addons_test.go:553: (dbg) Run:  kubectl --context addons-309585 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-309585 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-309585 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-309585 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.903051852s)
--- PASS: TestAddons/parallel/CSI (48.37s)

                                                
                                    
TestAddons/parallel/Headlamp (17.86s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-309585 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-309585 --alsologtostderr -v=1: (1.072180507s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-zpk68" [2195ec9c-dbfb-40fa-9351-2874be2e5455] Pending
helpers_test.go:344: "headlamp-69d78d796f-zpk68" [2195ec9c-dbfb-40fa-9351-2874be2e5455] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-zpk68" [2195ec9c-dbfb-40fa-9351-2874be2e5455] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003747092s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-309585 addons disable headlamp --alsologtostderr -v=1: (5.77826118s)
--- PASS: TestAddons/parallel/Headlamp (17.86s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5498fbc9c4-v26sn" [fff8e206-f033-428e-8b7d-d998bd643c9b] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003556781s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
TestAddons/parallel/LocalPath (53.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-309585 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-309585 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-309585 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [eb92713f-dbfa-4095-8686-2eeaa371fa27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [eb92713f-dbfa-4095-8686-2eeaa371fa27] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [eb92713f-dbfa-4095-8686-2eeaa371fa27] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003618477s
addons_test.go:906: (dbg) Run:  kubectl --context addons-309585 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 ssh "cat /opt/local-path-provisioner/pvc-6671b59e-4b84-4bc1-a120-cd3cb3df3a2f_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-309585 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-309585 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-309585 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.479703991s)
--- PASS: TestAddons/parallel/LocalPath (53.95s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4dwfp" [fb4d2661-d4b8-4f08-9808-b5cc59d00150] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004190781s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

                                                
                                    
TestAddons/parallel/Yakd (11.99s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-dhrpz" [d3b404ce-402e-410e-a432-534e804f5d92] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00413239s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-309585 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-309585 addons disable yakd --alsologtostderr -v=1: (5.988728882s)
--- PASS: TestAddons/parallel/Yakd (11.99s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-309585
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-309585: (10.882229901s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-309585
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-309585
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-309585
--- PASS: TestAddons/StoppedEnableDisable (11.18s)

                                                
                                    
TestCertOptions (40.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-940138 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E1216 20:26:24.446982    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-940138 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (37.185297378s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-940138 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-940138 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-940138 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-940138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-940138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-940138: (2.177435964s)
--- PASS: TestCertOptions (40.05s)

                                                
                                    
TestCertExpiration (252.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-886449 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-886449 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (46.661178035s)
E1216 20:25:56.744197    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-886449 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E1216 20:28:51.430723    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-886449 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (23.51599107s)
helpers_test.go:175: Cleaning up "cert-expiration-886449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-886449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-886449: (2.230495633s)
--- PASS: TestCertExpiration (252.41s)

                                                
                                    
TestDockerFlags (44.19s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-611707 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-611707 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.3878469s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-611707 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-611707 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-611707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-611707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-611707: (2.170039994s)
--- PASS: TestDockerFlags (44.19s)

                                                
                                    
TestForceSystemdFlag (35.71s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-705217 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-705217 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.577178315s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-705217 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-705217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-705217
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-705217: (2.659646208s)
--- PASS: TestForceSystemdFlag (35.71s)

                                                
                                    
TestForceSystemdEnv (48.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-504757 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-504757 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.638040555s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-504757 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-504757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-504757
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-504757: (2.335958015s)
--- PASS: TestForceSystemdEnv (48.49s)

                                                
                                    
TestErrorSpam/setup (31.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-331848 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-331848 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-331848 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-331848 --driver=docker  --container-runtime=docker: (31.25285979s)
--- PASS: TestErrorSpam/setup (31.25s)

                                                
                                    
TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

                                                
                                    
TestErrorSpam/status (1.27s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 status
--- PASS: TestErrorSpam/status (1.27s)

                                                
                                    
TestErrorSpam/pause (1.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

                                                
                                    
TestErrorSpam/stop (2.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 stop: (1.840187726s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-331848 --log_dir /tmp/nospam-331848 stop
--- PASS: TestErrorSpam/stop (2.06s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20091-2258/.minikube/files/etc/test/nested/copy/7569/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (41.37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-690644 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-690644 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (41.372274013s)
--- PASS: TestFunctional/serial/StartWithProxy (41.37s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (28.6s)

=== RUN   TestFunctional/serial/SoftStart
I1216 19:43:18.735199    7569 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-690644 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-690644 --alsologtostderr -v=8: (28.598362432s)
functional_test.go:663: soft start took 28.601097996s for "functional-690644" cluster.
I1216 19:43:47.333953    7569 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (28.60s)

                                                
                                    
TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-690644 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-690644 cache add registry.k8s.io/pause:3.1: (1.15763787s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-690644 cache add registry.k8s.io/pause:3.3: (1.197319792s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-690644 cache add registry.k8s.io/pause:latest: (1.104948805s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-690644 /tmp/TestFunctionalserialCacheCmdcacheadd_local1252713758/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 cache add minikube-local-cache-test:functional-690644
E1216 19:43:51.432188    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:43:51.438572    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:43:51.449968    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:43:51.471873    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:43:51.513232    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:43:51.594821    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:43:51.756266    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 cache delete minikube-local-cache-test:functional-690644
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-690644
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh sudo crictl images
E1216 19:43:52.077505    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1216 19:43:52.719416    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-690644 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.111071ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1216 19:43:54.001478    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 kubectl -- --context functional-690644 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-690644 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.19s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-690644 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 19:43:56.563760    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:44:01.686077    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:44:11.927486    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:44:32.408825    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-690644 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.189616162s)
functional_test.go:761: restart took 43.189716221s for "functional-690644" cluster.
I1216 19:44:37.610118    7569 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (43.19s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-690644 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.18s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-690644 logs: (1.183191994s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.24s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 logs --file /tmp/TestFunctionalserialLogsFileCmd945673230/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-690644 logs --file /tmp/TestFunctionalserialLogsFileCmd945673230/001/logs.txt: (1.237405032s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

                                                
                                    
TestFunctional/serial/InvalidService (4.82s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-690644 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-690644
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-690644: exit status 115 (793.158572ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30250 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-690644 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.82s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.64s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-690644 config get cpus: exit status 14 (163.838689ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-690644 config get cpus: exit status 14 (87.198672ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.64s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-690644 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-690644 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49852: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.52s)

                                                
                                    
TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-690644 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-690644 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (220.93053ms)

                                                
                                                
-- stdout --
	* [functional-690644] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 19:45:23.979059   49527 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:45:23.979204   49527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:45:23.979216   49527 out.go:358] Setting ErrFile to fd 2...
	I1216 19:45:23.979237   49527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:45:23.979638   49527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
	I1216 19:45:23.980709   49527 out.go:352] Setting JSON to false
	I1216 19:45:23.982043   49527 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1669,"bootTime":1734376655,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 19:45:23.982149   49527 start.go:139] virtualization:  
	I1216 19:45:23.987243   49527 out.go:177] * [functional-690644] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1216 19:45:23.990468   49527 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 19:45:23.990565   49527 notify.go:220] Checking for updates...
	I1216 19:45:23.995865   49527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:45:23.998424   49527 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:45:24.001025   49527 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	I1216 19:45:24.003575   49527 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 19:45:24.006107   49527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 19:45:24.011016   49527 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:45:24.011602   49527 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 19:45:24.041957   49527 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 19:45:24.042093   49527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 19:45:24.119113   49527 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-16 19:45:24.109110764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 19:45:24.120211   49527 docker.go:318] overlay module found
	I1216 19:45:24.123110   49527 out.go:177] * Using the docker driver based on existing profile
	I1216 19:45:24.125657   49527 start.go:297] selected driver: docker
	I1216 19:45:24.125679   49527 start.go:901] validating driver "docker" against &{Name:functional-690644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-690644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:45:24.125793   49527 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 19:45:24.129049   49527 out.go:201] 
	W1216 19:45:24.131642   49527 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 19:45:24.134327   49527 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-690644 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.46s)
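
The DryRun block above shows that a start requesting `--memory 250MB` is rejected during validation (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) before anything is provisioned. A hedged sketch of the same check, reusing the flags from the log (the binary path and profile name are carried over from this report, not a documented interface):

// dryrun_sketch.go - runs the dry-run invocation recorded above and confirms
// it fails with exit code 23 and mentions RSRC_INSUFFICIENT_REQ_MEMORY.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-p", "functional-690644", "--dry-run",
		"--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=docker")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected success: 250MB should be below the usable minimum")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 23:
		fmt.Println("got exit 23, as recorded in the report")
		fmt.Println("mentions RSRC_INSUFFICIENT_REQ_MEMORY:",
			strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY"))
	default:
		fmt.Println("unexpected failure:", err)
	}
}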

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-690644 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-690644 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (211.590216ms)

                                                
                                                
-- stdout --
	* [functional-690644] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 19:45:23.771185   49478 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:45:23.771348   49478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:45:23.771354   49478 out.go:358] Setting ErrFile to fd 2...
	I1216 19:45:23.771359   49478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:45:23.772269   49478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
	I1216 19:45:23.772753   49478 out.go:352] Setting JSON to false
	I1216 19:45:23.773799   49478 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1669,"bootTime":1734376655,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1216 19:45:23.773922   49478 start.go:139] virtualization:  
	I1216 19:45:23.777663   49478 out.go:177] * [functional-690644] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1216 19:45:23.781105   49478 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 19:45:23.781224   49478 notify.go:220] Checking for updates...
	I1216 19:45:23.786558   49478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:45:23.789075   49478 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	I1216 19:45:23.791927   49478 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	I1216 19:45:23.794604   49478 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 19:45:23.797310   49478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 19:45:23.800423   49478 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:45:23.800930   49478 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 19:45:23.827127   49478 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 19:45:23.827247   49478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 19:45:23.896772   49478 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-16 19:45:23.88646387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 19:45:23.896885   49478 docker.go:318] overlay module found
	I1216 19:45:23.900544   49478 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1216 19:45:23.903654   49478 start.go:297] selected driver: docker
	I1216 19:45:23.903680   49478 start.go:901] validating driver "docker" against &{Name:functional-690644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-690644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:45:23.903803   49478 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 19:45:23.907636   49478 out.go:201] 
	W1216 19:45:23.910410   49478 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 19:45:23.913124   49478 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (15.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-690644 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-690644 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-x9psn" [8e216583-3fb2-42d7-8981-49a058ff988a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-x9psn" [8e216583-3fb2-42d7-8981-49a058ff988a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.004481561s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31471
functional_test.go:1675: http://192.168.49.2:31471: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-8449669db6-x9psn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31471
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.64s)
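
The ServiceCmdConnect block resolves a NodePort URL with `minikube service hello-node-connect --url` and then fetches it; the echoserver body above is what a plain HTTP GET returns. A minimal sketch of that fetch step (the address below is the one printed in this particular run and will differ between runs):

// svc_connect_sketch.go - fetches the NodePort endpoint printed by
// "minikube service hello-node-connect --url" and dumps the echoserver reply.
// Substitute the URL your own run prints; this one is taken from the log above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://192.168.49.2:31471")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println("status:", resp.Status)
	fmt.Println(string(body)) // Hostname, request headers, etc., as shown above
}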

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (31.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [26b592c0-4c21-4b11-8bc2-8adc336da411] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003963187s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-690644 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-690644 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-690644 get pvc myclaim -o=json
I1216 19:44:53.713908    7569 retry.go:31] will retry after 2.06528972s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:7c0bbfd9-9b98-4b9e-a66e-21c7635980a1 ResourceVersion:680 Generation:0 CreationTimestamp:2024-12-16 19:44:53 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x40012d9260 VolumeMode:0x40012d9290 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-690644 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-690644 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4db426cf-fc94-45e9-b773-dac95a967c9b] Pending
helpers_test.go:344: "sp-pod" [4db426cf-fc94-45e9-b773-dac95a967c9b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4db426cf-fc94-45e9-b773-dac95a967c9b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004232127s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-690644 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-690644 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-690644 delete -f testdata/storage-provisioner/pod.yaml: (1.328915746s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-690644 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [03c16116-e8f7-4cbf-81f6-4493e12bc486] Pending
helpers_test.go:344: "sp-pod" [03c16116-e8f7-4cbf-81f6-4493e12bc486] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [03c16116-e8f7-4cbf-81f6-4493e12bc486] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003343294s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-690644 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.48s)
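
The PersistentVolumeClaim block applies testdata/storage-provisioner/pvc.yaml and then retries until the claim leaves Pending (the retry.go line above shows one such wait). A rough equivalent of that polling step using kubectl with a jsonpath query (context and claim name are the ones from this log; the timeout and interval are arbitrary choices, not the harness defaults):

// pvc_wait_sketch.go - polls a PVC until its phase is "Bound", mirroring the
// wait the test performs after applying pvc.yaml.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-690644",
			"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			fmt.Println("pvc myclaim is Bound")
			return
		}
		fmt.Printf("pvc phase = %q, want \"Bound\"; retrying\n", phase)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc to bind")
}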

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh -n functional-690644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 cp functional-690644:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1083871926/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh -n functional-690644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh -n functional-690644 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7569/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo cat /etc/test/nested/copy/7569/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7569.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo cat /etc/ssl/certs/7569.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7569.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo cat /usr/share/ca-certificates/7569.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75692.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo cat /etc/ssl/certs/75692.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75692.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo cat /usr/share/ca-certificates/75692.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-690644 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-690644 ssh "sudo systemctl is-active crio": exit status 1 (400.120599ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)
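
NonActiveRuntimeDisabled passes precisely because the command fails: with the docker runtime active, `systemctl is-active crio` prints "inactive" and exits non-zero (status 3 inside the node, surfaced by `minikube ssh` as the exit status 1 seen above). A sketch that encodes the same expectation (binary path and profile name taken from the log):

// crio_inactive_sketch.go - confirms cri-o is NOT the active runtime by
// expecting "systemctl is-active crio" to print "inactive" and exit non-zero.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-690644",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		fmt.Println("crio is inactive, as the test expects")
		return
	}
	fmt.Printf("unexpected result: state=%q err=%v\n", state, err)
}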

                                                
                                    
x
+
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-690644 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-690644 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-690644 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 46766: os: process already finished
helpers_test.go:508: unable to kill pid 46578: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-690644 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-690644 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-690644 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ea8fa00b-19cf-45e9-b5ee-56ec965ee30c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ea8fa00b-19cf-45e9-b5ee-56ec965ee30c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004203809s
I1216 19:44:56.488219    7569 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-690644 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.31.224 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-690644 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-690644 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-690644 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-6kbgj" [76e8b4cd-e857-4e70-9ea5-ddd00f26fdfe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E1216 19:45:13.370993    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-64fc58db8c-6kbgj" [76e8b4cd-e857-4e70-9ea5-ddd00f26fdfe] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.00359639s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)
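
The DeployApp step above creates and exposes the hello-node deployment with plain kubectl and then waits for the pod to become healthy. Outside the harness, the same sequence can be approximated with the commands from the log plus `kubectl wait` (a substitute for the test's own label-based pod polling; image and names are the ones recorded above):

// deploy_app_sketch.go - replays the create/expose commands from the log and
// waits for the deployment to become available instead of polling pods by label.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-690644"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	steps := [][]string{
		{"create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver-arm:1.8"},
		{"expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
		{"wait", "--for=condition=available", "--timeout=120s", "deployment/hello-node"},
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
	fmt.Println("hello-node is available")
}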

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "366.249811ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "60.599321ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "472.554629ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "72.763013ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdany-port622128727/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734378320243418870" to /tmp/TestFunctionalparallelMountCmdany-port622128727/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734378320243418870" to /tmp/TestFunctionalparallelMountCmdany-port622128727/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734378320243418870" to /tmp/TestFunctionalparallelMountCmdany-port622128727/001/test-1734378320243418870
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (549.998882ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 19:45:20.793697    7569 retry.go:31] will retry after 446.063618ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 19:45 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 19:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 19:45 test-1734378320243418870
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh cat /mount-9p/test-1734378320243418870
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-690644 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [65dd7954-a14c-4ac1-bb28-2a609fb894e3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [65dd7954-a14c-4ac1-bb28-2a609fb894e3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [65dd7954-a14c-4ac1-bb28-2a609fb894e3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003943008s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-690644 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdany-port622128727/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.59s)
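
In the MountCmd/any-port block, the first `findmnt -T /mount-9p` runs before the 9p mount has appeared, fails, and is retried, which is why the non-zero exit above is harmless. A sketch of that verify-with-retry step (binary, profile, and mount point from the log; the retry budget is an arbitrary choice):

// mount_check_sketch.go - retries "findmnt -T /mount-9p" inside the minikube
// node until the 9p mount shows up, mirroring the retry seen in the report.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-690644",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out)) // mount is visible inside the node
			return
		}
		fmt.Printf("attempt %d: mount not visible yet (%v), retrying\n", attempt, err)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}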

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 service list -o json
functional_test.go:1494: Took "722.336312ms" to run "out/minikube-linux-arm64 -p functional-690644 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32154
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32154
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdspecific-port3839692385/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (476.650974ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 19:45:29.313017    7569 retry.go:31] will retry after 309.767855ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdspecific-port3839692385/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-690644 ssh "sudo umount -f /mount-9p": exit status 1 (333.377612ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-690644 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdspecific-port3839692385/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2889009102/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2889009102/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2889009102/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T" /mount1: exit status 1 (864.304813ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 19:45:31.705023    7569 retry.go:31] will retry after 716.49745ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-690644 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2889009102/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2889009102/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-690644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2889009102/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.83s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.33s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-690644 version -o=json --components: (1.325561802s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-690644 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-690644
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-690644
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-690644 image ls --format short --alsologtostderr:
I1216 19:45:42.324504   52613 out.go:345] Setting OutFile to fd 1 ...
I1216 19:45:42.324778   52613 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:42.324809   52613 out.go:358] Setting ErrFile to fd 2...
I1216 19:45:42.324830   52613 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:42.325125   52613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
I1216 19:45:42.325981   52613 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:42.326195   52613 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:42.326941   52613 cli_runner.go:164] Run: docker container inspect functional-690644 --format={{.State.Status}}
I1216 19:45:42.345804   52613 ssh_runner.go:195] Run: systemctl --version
I1216 19:45:42.345919   52613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-690644
I1216 19:45:42.370549   52613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/functional-690644/id_rsa Username:docker}
I1216 19:45:42.474547   52613 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-690644 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-690644 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/etcd                        | 3.5.16-0          | 7fc9d4aa817aa | 142MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-690644 | 18afdfff8a5cf | 30B    |
| registry.k8s.io/kube-scheduler              | v1.32.0           | c3ff26fb59f37 | 67.9MB |
| registry.k8s.io/kube-proxy                  | v1.32.0           | 2f50386e20bfd | 97.1MB |
| docker.io/library/nginx                     | latest            | bdf62fd3a32f1 | 197MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-apiserver              | v1.32.0           | 2b5bd0f16085a | 93.9MB |
| docker.io/library/nginx                     | alpine            | dba92e6b64886 | 56.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.32.0           | a8d049396f6b8 | 87.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-690644 image ls --format table --alsologtostderr:
I1216 19:45:43.011075   52793 out.go:345] Setting OutFile to fd 1 ...
I1216 19:45:43.011267   52793 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:43.011273   52793 out.go:358] Setting ErrFile to fd 2...
I1216 19:45:43.011294   52793 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:43.011602   52793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
I1216 19:45:43.012315   52793 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:43.012436   52793 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:43.012920   52793 cli_runner.go:164] Run: docker container inspect functional-690644 --format={{.State.Status}}
I1216 19:45:43.040617   52793 ssh_runner.go:195] Run: systemctl --version
I1216 19:45:43.040684   52793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-690644
I1216 19:45:43.060229   52793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/functional-690644/id_rsa Username:docker}
I1216 19:45:43.162657   52793 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-690644 image ls --format json --alsologtostderr:
[{"id":"18afdfff8a5cf97e5e2bae14c3e760013bc10ca69cdad062a6cf0cae285d3f5c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-690644"],"size":"30"},{"id":"7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"142000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"93900000"},{"id":"dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"56900000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf
392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"87200000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["dock
er.io/kicbase/echo-server:functional-690644"],"size":"4780000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"67900000"},{"id":"2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"97100000"},{"id":"bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"si
ze":"29000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-690644 image ls --format json --alsologtostderr:
I1216 19:45:42.736542   52729 out.go:345] Setting OutFile to fd 1 ...
I1216 19:45:42.736718   52729 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:42.736747   52729 out.go:358] Setting ErrFile to fd 2...
I1216 19:45:42.736768   52729 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:42.738127   52729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
I1216 19:45:42.738922   52729 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:42.739070   52729 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:42.739551   52729 cli_runner.go:164] Run: docker container inspect functional-690644 --format={{.State.Status}}
I1216 19:45:42.761917   52729 ssh_runner.go:195] Run: systemctl --version
I1216 19:45:42.761968   52729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-690644
I1216 19:45:42.785064   52729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/functional-690644/id_rsa Username:docker}
I1216 19:45:42.890472   52729 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
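Of the four list formats, the JSON output above is the easiest to post-process. A minimal sketch, assuming jq is available on the host (the suite itself does not use it):
	# print the registry.k8s.io component images with their reported sizes
	out/minikube-linux-arm64 -p functional-690644 image ls --format json \
	  | jq -r '.[] | select(.repoTags[] | startswith("registry.k8s.io/")) | "\(.repoTags[0]) \(.size)"'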

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-690644 image ls --format yaml --alsologtostderr:
- id: 2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "93900000"
- id: c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "67900000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-690644
size: "4780000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 18afdfff8a5cf97e5e2bae14c3e760013bc10ca69cdad062a6cf0cae285d3f5c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-690644
size: "30"
- id: a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "87200000"
- id: 7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "142000000"
- id: bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "56900000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "97100000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-690644 image ls --format yaml --alsologtostderr:
I1216 19:45:42.450878   52651 out.go:345] Setting OutFile to fd 1 ...
I1216 19:45:42.452118   52651 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:42.452133   52651 out.go:358] Setting ErrFile to fd 2...
I1216 19:45:42.452140   52651 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:42.452536   52651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
I1216 19:45:42.453613   52651 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:42.453859   52651 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:42.454597   52651 cli_runner.go:164] Run: docker container inspect functional-690644 --format={{.State.Status}}
I1216 19:45:42.472907   52651 ssh_runner.go:195] Run: systemctl --version
I1216 19:45:42.472961   52651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-690644
I1216 19:45:42.501636   52651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/functional-690644/id_rsa Username:docker}
I1216 19:45:42.610941   52651 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-690644 ssh pgrep buildkitd: exit status 1 (338.23246ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image build -t localhost/my-image:functional-690644 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-690644 image build -t localhost/my-image:functional-690644 testdata/build --alsologtostderr: (2.987318825s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-690644 image build -t localhost/my-image:functional-690644 testdata/build --alsologtostderr:
I1216 19:45:42.920225   52777 out.go:345] Setting OutFile to fd 1 ...
I1216 19:45:42.920436   52777 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:42.920447   52777 out.go:358] Setting ErrFile to fd 2...
I1216 19:45:42.920453   52777 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:45:42.920709   52777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
I1216 19:45:42.921413   52777 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:42.927750   52777 config.go:182] Loaded profile config "functional-690644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
I1216 19:45:42.928379   52777 cli_runner.go:164] Run: docker container inspect functional-690644 --format={{.State.Status}}
I1216 19:45:42.964733   52777 ssh_runner.go:195] Run: systemctl --version
I1216 19:45:42.964786   52777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-690644
I1216 19:45:43.009077   52777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/functional-690644/id_rsa Username:docker}
I1216 19:45:43.114771   52777 build_images.go:161] Building image from path: /tmp/build.403730993.tar
I1216 19:45:43.114840   52777 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 19:45:43.124439   52777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.403730993.tar
I1216 19:45:43.128288   52777 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.403730993.tar: stat -c "%s %y" /var/lib/minikube/build/build.403730993.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.403730993.tar': No such file or directory
I1216 19:45:43.128319   52777 ssh_runner.go:362] scp /tmp/build.403730993.tar --> /var/lib/minikube/build/build.403730993.tar (3072 bytes)
I1216 19:45:43.155039   52777 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.403730993
I1216 19:45:43.167247   52777 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.403730993 -xf /var/lib/minikube/build/build.403730993.tar
I1216 19:45:43.177867   52777 docker.go:360] Building image: /var/lib/minikube/build/build.403730993
I1216 19:45:43.177937   52777 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-690644 /var/lib/minikube/build/build.403730993
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:86eb5fdeefd51dc9840e5c86f4351d63ea59614beaa9ca392fb036e60f260d35
#8 writing image sha256:86eb5fdeefd51dc9840e5c86f4351d63ea59614beaa9ca392fb036e60f260d35 done
#8 naming to localhost/my-image:functional-690644 done
#8 DONE 0.1s
I1216 19:45:45.809573   52777 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-690644 /var/lib/minikube/build/build.403730993: (2.631613768s)
I1216 19:45:45.809650   52777 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.403730993
I1216 19:45:45.819578   52777 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.403730993.tar
I1216 19:45:45.829415   52777 build_images.go:217] Built localhost/my-image:functional-690644 from /tmp/build.403730993.tar
I1216 19:45:45.829452   52777 build_images.go:133] succeeded building to: functional-690644
I1216 19:45:45.829467   52777 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)
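The numbered build stages above (#5 FROM, #6 RUN, #7 ADD) imply a three-step Dockerfile under testdata/build; the sketch below reconstructs it from those stages rather than quoting the file itself, then repeats the build and list commands from the log.
	# testdata/build/Dockerfile, as inferred from stages #5-#7:
	#   FROM gcr.io/k8s-minikube/busybox
	#   RUN true
	#   ADD content.txt /
	out/minikube-linux-arm64 -p functional-690644 image build -t localhost/my-image:functional-690644 testdata/build --alsologtostderr
	out/minikube-linux-arm64 -p functional-690644 image ls   # localhost/my-image:functional-690644 should now be listed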

TestFunctional/parallel/ImageCommands/Setup (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-690644
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image load --daemon kicbase/echo-server:functional-690644 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image load --daemon kicbase/echo-server:functional-690644 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-690644
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image load --daemon kicbase/echo-server:functional-690644 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image save kicbase/echo-server:functional-690644 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image rm kicbase/echo-server:functional-690644 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls
2024/12/16 19:45:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)
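Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile exercise a save/remove/restore round trip through a tarball. The sequence below strings together the exact commands from those three tests, using the tar path from this run.
	out/minikube-linux-arm64 -p functional-690644 image save kicbase/echo-server:functional-690644 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
	out/minikube-linux-arm64 -p functional-690644 image rm kicbase/echo-server:functional-690644 --alsologtostderr
	out/minikube-linux-arm64 -p functional-690644 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
	out/minikube-linux-arm64 -p functional-690644 image ls   # the echo-server tag should be listed again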

TestFunctional/parallel/DockerEnv/bash (1.45s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-690644 docker-env) && out/minikube-linux-arm64 status -p functional-690644"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-690644 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.45s)
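The docker-env check points the host Docker client at the daemon inside the profile for the duration of one shell. A minimal sketch of the same flow as the two bash invocations above:
	# export DOCKER_HOST and the related variables into this shell
	eval $(out/minikube-linux-arm64 -p functional-690644 docker-env)
	# docker now talks to the daemon inside the functional-690644 container
	docker images
	out/minikube-linux-arm64 status -p functional-690644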

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-690644
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 image save --daemon kicbase/echo-server:functional-690644 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-690644
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-690644 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-690644
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-690644
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-690644
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (128.68s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-082404 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E1216 19:46:35.292889    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-082404 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m7.786588291s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (128.68s)
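The HA cluster used by the rest of this serial group comes from the single start invocation above: --ha provisions the three control-plane nodes seen in the later status output (ha-082404, -m02, -m03) behind the shared apiserver endpoint https://192.168.49.254:8443, and the worker -m04 is added in a later step.
	out/minikube-linux-arm64 start -p ha-082404 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker
	out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr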

TestMultiControlPlane/serial/DeployApp (8.53s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-082404 -- rollout status deployment/busybox: (5.316708595s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-cscgq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-f7kww -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-mdgdk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-cscgq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-f7kww -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-mdgdk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-cscgq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-f7kww -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-mdgdk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.53s)
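The DNS checks above run against the busybox deployment from testdata/ha/ha-pod-dns-test.yaml, resolving an external name, the in-cluster service name, and its fully qualified form from every replica. The condensed sketch below targets the deployment instead of individual pod names, which change between runs:
	out/minikube-linux-arm64 kubectl -p ha-082404 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-arm64 kubectl -p ha-082404 -- rollout status deployment/busybox
	# exec against the deployment picks one ready pod rather than naming a replica
	out/minikube-linux-arm64 kubectl -p ha-082404 -- exec deploy/busybox -- nslookup kubernetes.io
	out/minikube-linux-arm64 kubectl -p ha-082404 -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local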

TestMultiControlPlane/serial/PingHostFromPods (1.68s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-cscgq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-cscgq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-f7kww -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-f7kww -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-mdgdk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-mdgdk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.68s)
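The host-from-pod check first extracts the address that host.minikube.internal resolves to inside a pod (192.168.49.1 in this run) and then pings it; the awk 'NR==5' and cut stages are meant to pick the answer line of busybox's nslookup output and strip it down to the bare address. Repeating one pod's pair of commands from the log:
	out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-cscgq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-arm64 kubectl -p ha-082404 -- exec busybox-58667487b6-cscgq -- sh -c "ping -c 1 192.168.49.1"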

TestMultiControlPlane/serial/AddWorkerNode (27.36s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-082404 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-082404 -v=7 --alsologtostderr: (26.230588847s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr: (1.132749934s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.36s)
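Adding the node that later shows up as ha-082404-m04 (type: Worker in the status output further down) is a single node add against the profile, followed by a status check:
	out/minikube-linux-arm64 node add -p ha-082404 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr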

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-082404 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.108975975s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMultiControlPlane/serial/CopyFile (20.73s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-082404 status --output json -v=7 --alsologtostderr: (1.107245157s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp testdata/cp-test.txt ha-082404:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3791437405/001/cp-test_ha-082404.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404:/home/docker/cp-test.txt ha-082404-m02:/home/docker/cp-test_ha-082404_ha-082404-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m02 "sudo cat /home/docker/cp-test_ha-082404_ha-082404-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404:/home/docker/cp-test.txt ha-082404-m03:/home/docker/cp-test_ha-082404_ha-082404-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m03 "sudo cat /home/docker/cp-test_ha-082404_ha-082404-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404:/home/docker/cp-test.txt ha-082404-m04:/home/docker/cp-test_ha-082404_ha-082404-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m04 "sudo cat /home/docker/cp-test_ha-082404_ha-082404-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp testdata/cp-test.txt ha-082404-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3791437405/001/cp-test_ha-082404-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m02:/home/docker/cp-test.txt ha-082404:/home/docker/cp-test_ha-082404-m02_ha-082404.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404 "sudo cat /home/docker/cp-test_ha-082404-m02_ha-082404.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m02:/home/docker/cp-test.txt ha-082404-m03:/home/docker/cp-test_ha-082404-m02_ha-082404-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m03 "sudo cat /home/docker/cp-test_ha-082404-m02_ha-082404-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m02:/home/docker/cp-test.txt ha-082404-m04:/home/docker/cp-test_ha-082404-m02_ha-082404-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m04 "sudo cat /home/docker/cp-test_ha-082404-m02_ha-082404-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp testdata/cp-test.txt ha-082404-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3791437405/001/cp-test_ha-082404-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m03:/home/docker/cp-test.txt ha-082404:/home/docker/cp-test_ha-082404-m03_ha-082404.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404 "sudo cat /home/docker/cp-test_ha-082404-m03_ha-082404.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m03:/home/docker/cp-test.txt ha-082404-m02:/home/docker/cp-test_ha-082404-m03_ha-082404-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m02 "sudo cat /home/docker/cp-test_ha-082404-m03_ha-082404-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m03:/home/docker/cp-test.txt ha-082404-m04:/home/docker/cp-test_ha-082404-m03_ha-082404-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m03 "sudo cat /home/docker/cp-test.txt"
E1216 19:48:51.430510    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m04 "sudo cat /home/docker/cp-test_ha-082404-m03_ha-082404-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp testdata/cp-test.txt ha-082404-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3791437405/001/cp-test_ha-082404-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m04:/home/docker/cp-test.txt ha-082404:/home/docker/cp-test_ha-082404-m04_ha-082404.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404 "sudo cat /home/docker/cp-test_ha-082404-m04_ha-082404.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m04:/home/docker/cp-test.txt ha-082404-m02:/home/docker/cp-test_ha-082404-m04_ha-082404-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m02 "sudo cat /home/docker/cp-test_ha-082404-m04_ha-082404-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 cp ha-082404-m04:/home/docker/cp-test.txt ha-082404-m03:/home/docker/cp-test_ha-082404-m04_ha-082404-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m03 "sudo cat /home/docker/cp-test_ha-082404-m04_ha-082404-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.73s)
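The CopyFile matrix above is built from two primitives: minikube cp to move the file and an ssh cat to verify the destination, with -n selecting the node the ssh runs on. One host-to-node and one node-to-node hop, using the same paths as the log:
	# host to primary control plane, then verify
	out/minikube-linux-arm64 -p ha-082404 cp testdata/cp-test.txt ha-082404:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404 "sudo cat /home/docker/cp-test.txt"
	# node to node, then verify on the destination
	out/minikube-linux-arm64 -p ha-082404 cp ha-082404:/home/docker/cp-test.txt ha-082404-m02:/home/docker/cp-test_ha-082404_ha-082404-m02.txt
	out/minikube-linux-arm64 -p ha-082404 ssh -n ha-082404-m02 "sudo cat /home/docker/cp-test_ha-082404_ha-082404-m02.txt"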

TestMultiControlPlane/serial/StopSecondaryNode (11.76s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-082404 node stop m02 -v=7 --alsologtostderr: (10.974773511s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr: exit status 7 (785.165586ms)

-- stdout --
	ha-082404
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-082404-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-082404-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-082404-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1216 19:49:08.006084   75709 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:49:08.006289   75709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:49:08.006317   75709 out.go:358] Setting ErrFile to fd 2...
	I1216 19:49:08.006337   75709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:49:08.006613   75709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
	I1216 19:49:08.006874   75709 out.go:352] Setting JSON to false
	I1216 19:49:08.006940   75709 mustload.go:65] Loading cluster: ha-082404
	I1216 19:49:08.007034   75709 notify.go:220] Checking for updates...
	I1216 19:49:08.007460   75709 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:49:08.007503   75709 status.go:174] checking status of ha-082404 ...
	I1216 19:49:08.008105   75709 cli_runner.go:164] Run: docker container inspect ha-082404 --format={{.State.Status}}
	I1216 19:49:08.035352   75709 status.go:371] ha-082404 host status = "Running" (err=<nil>)
	I1216 19:49:08.035376   75709 host.go:66] Checking if "ha-082404" exists ...
	I1216 19:49:08.035882   75709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404
	I1216 19:49:08.080353   75709 host.go:66] Checking if "ha-082404" exists ...
	I1216 19:49:08.080799   75709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 19:49:08.080891   75709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404
	I1216 19:49:08.102507   75709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404/id_rsa Username:docker}
	I1216 19:49:08.203490   75709 ssh_runner.go:195] Run: systemctl --version
	I1216 19:49:08.207977   75709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 19:49:08.220432   75709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 19:49:08.281306   75709 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-12-16 19:49:08.270667213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 19:49:08.281946   75709 kubeconfig.go:125] found "ha-082404" server: "https://192.168.49.254:8443"
	I1216 19:49:08.281982   75709 api_server.go:166] Checking apiserver status ...
	I1216 19:49:08.282029   75709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 19:49:08.294259   75709 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2426/cgroup
	I1216 19:49:08.304004   75709 api_server.go:182] apiserver freezer: "12:freezer:/docker/df79637e07d1fa9b770fdad3a3220b4d498aee0558c4946d136f873d151dccd1/kubepods/burstable/podf38ec677dc4742a89b84a027e0e4241f/0f446c9970f7ee0002c9ae729e772c5d161a3f8d329131d6ff1753e90d123c36"
	I1216 19:49:08.304076   75709 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/df79637e07d1fa9b770fdad3a3220b4d498aee0558c4946d136f873d151dccd1/kubepods/burstable/podf38ec677dc4742a89b84a027e0e4241f/0f446c9970f7ee0002c9ae729e772c5d161a3f8d329131d6ff1753e90d123c36/freezer.state
	I1216 19:49:08.313261   75709 api_server.go:204] freezer state: "THAWED"
	I1216 19:49:08.313298   75709 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 19:49:08.322328   75709 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 19:49:08.322365   75709 status.go:463] ha-082404 apiserver status = Running (err=<nil>)
	I1216 19:49:08.322378   75709 status.go:176] ha-082404 status: &{Name:ha-082404 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 19:49:08.322395   75709 status.go:174] checking status of ha-082404-m02 ...
	I1216 19:49:08.322764   75709 cli_runner.go:164] Run: docker container inspect ha-082404-m02 --format={{.State.Status}}
	I1216 19:49:08.342517   75709 status.go:371] ha-082404-m02 host status = "Stopped" (err=<nil>)
	I1216 19:49:08.342543   75709 status.go:384] host is not running, skipping remaining checks
	I1216 19:49:08.342550   75709 status.go:176] ha-082404-m02 status: &{Name:ha-082404-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 19:49:08.342569   75709 status.go:174] checking status of ha-082404-m03 ...
	I1216 19:49:08.342872   75709 cli_runner.go:164] Run: docker container inspect ha-082404-m03 --format={{.State.Status}}
	I1216 19:49:08.363125   75709 status.go:371] ha-082404-m03 host status = "Running" (err=<nil>)
	I1216 19:49:08.363149   75709 host.go:66] Checking if "ha-082404-m03" exists ...
	I1216 19:49:08.363541   75709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404-m03
	I1216 19:49:08.383606   75709 host.go:66] Checking if "ha-082404-m03" exists ...
	I1216 19:49:08.384081   75709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 19:49:08.384124   75709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m03
	I1216 19:49:08.401185   75709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m03/id_rsa Username:docker}
	I1216 19:49:08.502921   75709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 19:49:08.515293   75709 kubeconfig.go:125] found "ha-082404" server: "https://192.168.49.254:8443"
	I1216 19:49:08.515320   75709 api_server.go:166] Checking apiserver status ...
	I1216 19:49:08.515362   75709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 19:49:08.528570   75709 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2298/cgroup
	I1216 19:49:08.538587   75709 api_server.go:182] apiserver freezer: "12:freezer:/docker/96044188b98ac85f38f14601885e4044fedd8b4f2bb921a4867638f31e7874b3/kubepods/burstable/pod56552c1ae280aecb3dcb8a952ff01b06/557204dad376b6783863acbaa60fcb81dc7919d1d99ec312454cfa5860f775fc"
	I1216 19:49:08.538690   75709 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/96044188b98ac85f38f14601885e4044fedd8b4f2bb921a4867638f31e7874b3/kubepods/burstable/pod56552c1ae280aecb3dcb8a952ff01b06/557204dad376b6783863acbaa60fcb81dc7919d1d99ec312454cfa5860f775fc/freezer.state
	I1216 19:49:08.547461   75709 api_server.go:204] freezer state: "THAWED"
	I1216 19:49:08.547494   75709 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 19:49:08.555589   75709 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 19:49:08.555659   75709 status.go:463] ha-082404-m03 apiserver status = Running (err=<nil>)
	I1216 19:49:08.555684   75709 status.go:176] ha-082404-m03 status: &{Name:ha-082404-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 19:49:08.555729   75709 status.go:174] checking status of ha-082404-m04 ...
	I1216 19:49:08.556072   75709 cli_runner.go:164] Run: docker container inspect ha-082404-m04 --format={{.State.Status}}
	I1216 19:49:08.573159   75709 status.go:371] ha-082404-m04 host status = "Running" (err=<nil>)
	I1216 19:49:08.573182   75709 host.go:66] Checking if "ha-082404-m04" exists ...
	I1216 19:49:08.573470   75709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-082404-m04
	I1216 19:49:08.603465   75709 host.go:66] Checking if "ha-082404-m04" exists ...
	I1216 19:49:08.603766   75709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 19:49:08.603809   75709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-082404-m04
	I1216 19:49:08.622919   75709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/ha-082404-m04/id_rsa Username:docker}
	I1216 19:49:08.722824   75709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 19:49:08.735257   75709 status.go:176] ha-082404-m04 status: &{Name:ha-082404-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.76s)
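The status call captured in the stderr block above checks each control plane the same way: locate the newest kube-apiserver process, read its freezer cgroup to confirm the pod is THAWED rather than paused, then probe /healthz on the shared load-balancer endpoint. A minimal way to reproduce that by hand for this run (profile name and the 192.168.49.254:8443 endpoint are taken from the log; the cgroup path below is a cgroup-v1 placeholder, not a literal value):

	# newest kube-apiserver process inside the control-plane node
	out/minikube-linux-arm64 -p ha-082404 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# confirm the pod's freezer cgroup is THAWED (substitute the IDs printed by the status log)
	out/minikube-linux-arm64 -p ha-082404 ssh -- sudo cat '/sys/fs/cgroup/freezer/docker/<container-id>/kubepods/burstable/<pod-uid>/<container-id>/freezer.state'
	# probe apiserver health through the HA load-balancer address (uses the ha-082404 kubeconfig context)
	kubectl --context ha-082404 get --raw='/healthz'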

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (37.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 node start m02 -v=7 --alsologtostderr
E1216 19:49:19.134321    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-082404 node start m02 -v=7 --alsologtostderr: (35.545394914s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr: (1.608446444s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1216 19:49:47.036154    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:47.042502    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:47.053810    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:47.075310    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:47.116585    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:47.198003    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:47.359451    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:47.681088    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.254049317s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (256.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-082404 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-082404 -v=7 --alsologtostderr
E1216 19:49:48.322838    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:49.604833    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:52.167090    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:49:57.288863    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:50:07.531030    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-082404 -v=7 --alsologtostderr: (34.426797719s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-082404 --wait=true -v=7 --alsologtostderr
E1216 19:50:28.013241    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:51:08.975153    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:52:30.899042    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:53:51.430702    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-082404 --wait=true -v=7 --alsologtostderr: (3m42.223285294s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-082404
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (256.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-082404 node delete m03 -v=7 --alsologtostderr: (10.718077988s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (32.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 stop -v=7 --alsologtostderr
E1216 19:54:47.036141    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-082404 stop -v=7 --alsologtostderr: (32.731692976s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr: exit status 7 (122.777051ms)

                                                
                                                
-- stdout --
	ha-082404
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-082404-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-082404-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 19:54:50.108211  103657 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:54:50.108399  103657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:54:50.108433  103657 out.go:358] Setting ErrFile to fd 2...
	I1216 19:54:50.108454  103657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:54:50.108743  103657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
	I1216 19:54:50.108977  103657 out.go:352] Setting JSON to false
	I1216 19:54:50.109051  103657 mustload.go:65] Loading cluster: ha-082404
	I1216 19:54:50.109117  103657 notify.go:220] Checking for updates...
	I1216 19:54:50.109552  103657 config.go:182] Loaded profile config "ha-082404": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 19:54:50.109590  103657 status.go:174] checking status of ha-082404 ...
	I1216 19:54:50.110232  103657 cli_runner.go:164] Run: docker container inspect ha-082404 --format={{.State.Status}}
	I1216 19:54:50.130088  103657 status.go:371] ha-082404 host status = "Stopped" (err=<nil>)
	I1216 19:54:50.130110  103657 status.go:384] host is not running, skipping remaining checks
	I1216 19:54:50.130117  103657 status.go:176] ha-082404 status: &{Name:ha-082404 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 19:54:50.130161  103657 status.go:174] checking status of ha-082404-m02 ...
	I1216 19:54:50.130480  103657 cli_runner.go:164] Run: docker container inspect ha-082404-m02 --format={{.State.Status}}
	I1216 19:54:50.157946  103657 status.go:371] ha-082404-m02 host status = "Stopped" (err=<nil>)
	I1216 19:54:50.157966  103657 status.go:384] host is not running, skipping remaining checks
	I1216 19:54:50.157981  103657 status.go:176] ha-082404-m02 status: &{Name:ha-082404-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 19:54:50.158001  103657 status.go:174] checking status of ha-082404-m04 ...
	I1216 19:54:50.158341  103657 cli_runner.go:164] Run: docker container inspect ha-082404-m04 --format={{.State.Status}}
	I1216 19:54:50.176867  103657 status.go:371] ha-082404-m04 host status = "Stopped" (err=<nil>)
	I1216 19:54:50.176887  103657 status.go:384] host is not running, skipping remaining checks
	I1216 19:54:50.176894  103657 status.go:176] ha-082404-m04 status: &{Name:ha-082404-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.85s)
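The non-zero exit above is expected: with every node stopped, minikube status reports the state through its exit code (7 in this run) as well as the text, so automation can branch on the code instead of parsing stdout. A small sketch of that pattern (exit-code meaning inferred from this run only; partially running clusters return other non-zero codes):

	if out/minikube-linux-arm64 -p ha-082404 status >/dev/null 2>&1; then
		echo "ha-082404 is fully running"
	else
		rc=$?
		echo "ha-082404 is not fully running (minikube status exit code: $rc)"
	fi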

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.129409191s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (44.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-082404 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-082404 --control-plane -v=7 --alsologtostderr: (43.4977363s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-082404 status -v=7 --alsologtostderr: (1.051952077s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.109279342s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (31.01s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-758310 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-758310 --driver=docker  --container-runtime=docker: (31.010999759s)
--- PASS: TestImageBuild/serial/Setup (31.01s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (2.42s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-758310
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-758310: (2.416516768s)
--- PASS: TestImageBuild/serial/NormalBuild (2.42s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (1.19s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-758310
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-758310: (1.187513706s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.19s)
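The --build-opt values above are passed through to the Docker build inside the node, so --build-opt=build-arg=ENV_A=test_env_str only takes effect if the build context declares a matching ARG. The contents of ./testdata/image-build/test-arg are not shown in this report; the Dockerfile below is a hypothetical stand-in that would exercise the same flag:

	mkdir -p /tmp/test-arg
	cat > /tmp/test-arg/Dockerfile <<-'EOF'
	FROM busybox
	ARG ENV_A
	RUN echo "ENV_A was: ${ENV_A}"
	EOF
	out/minikube-linux-arm64 image build -t aaa:latest \
		--build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache \
		/tmp/test-arg -p image-758310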

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.95s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-758310
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.95s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.8s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-758310
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.80s)

                                                
                                    
x
+
TestJSONOutput/start/Command (49.83s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-284228 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E1216 19:58:51.430080    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-284228 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (49.820201529s)
--- PASS: TestJSONOutput/start/Command (49.83s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-284228 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.53s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-284228 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (10.89s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-284228 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-284228 --output=json --user=testUser: (10.889297317s)
--- PASS: TestJSONOutput/stop/Command (10.89s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-876595 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-876595 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.676923ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"43badedf-a16d-458b-9127-62f46135b6fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-876595] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee123047-8e05-4c75-8f4a-bc5562cbe3c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20091"}}
	{"specversion":"1.0","id":"1d6a5814-34db-433f-89f9-2050bfed1c8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1685c79a-bb94-416a-8670-01dbbf002302","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig"}}
	{"specversion":"1.0","id":"30164ef3-09d0-47f9-a0dc-0af66e22942a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube"}}
	{"specversion":"1.0","id":"64c31d82-a079-4a5d-8f0b-1aa9c889df99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7219c2ea-4e35-43c5-81b6-0bbe2259ff0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f5a8a720-02dd-4d33-ae52-56f9d56ed710","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-876595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-876595
--- PASS: TestErrorJSONOutput (0.24s)
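Every line in the --output=json stream above is a self-contained CloudEvents-style object, which makes failures easy to extract mechanically. A minimal sketch, assuming jq is available (field names match the events printed above):

	out/minikube-linux-arm64 start -p json-output-error-876595 --memory=2200 --output=json --wait=true --driver=fail \
		| jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit code \(.data.exitcode))"'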

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.16s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-413756 --network=
E1216 19:59:47.036098    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-413756 --network=: (37.602669593s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-413756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-413756
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-413756: (2.52919857s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.16s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.23s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-207717 --network=bridge
E1216 20:00:14.495699    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-207717 --network=bridge: (33.171595859s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-207717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-207717
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-207717: (2.032876122s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.23s)

                                                
                                    
x
+
TestKicExistingNetwork (31.81s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1216 20:00:36.863744    7569 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 20:00:36.884948    7569 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 20:00:36.885040    7569 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1216 20:00:36.886301    7569 cli_runner.go:164] Run: docker network inspect existing-network
W1216 20:00:36.902206    7569 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1216 20:00:36.902236    7569 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1216 20:00:36.902252    7569 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1216 20:00:36.902455    7569 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 20:00:36.918710    7569 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a15d316ef218 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5e:7e:65:ba} reservation:<nil>}
I1216 20:00:36.920122    7569 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017a2570}
I1216 20:00:36.920160    7569 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1216 20:00:36.920216    7569 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1216 20:00:36.995534    7569 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-559798 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-559798 --network=existing-network: (29.579453005s)
helpers_test.go:175: Cleaning up "existing-network-559798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-559798
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-559798: (2.060832256s)
I1216 20:01:08.653731    7569 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.81s)
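The sequence above is also the recipe for attaching minikube to a Docker network you manage yourself: create the bridge network first, then point --network at it, and minikube reuses it instead of creating its own. A short sketch with hypothetical network and profile names (192.168.58.0/24 was simply the first free private /24 on this host; any unused range works):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 my-existing-network
	out/minikube-linux-arm64 start -p existing-network-demo --network=my-existing-network
	# list the containers attached to the shared network
	docker network inspect my-existing-network --format '{{range $k, $v := .Containers}}{{$v.Name}} {{$v.IPv4Address}}{{"\n"}}{{end}}'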

                                                
                                    
x
+
TestKicCustomSubnet (34.99s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-422349 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-422349 --subnet=192.168.60.0/24: (32.754769573s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-422349 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-422349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-422349
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-422349: (2.209964253s)
--- PASS: TestKicCustomSubnet (34.99s)

                                                
                                    
x
+
TestKicStaticIP (33.71s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-716966 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-716966 --static-ip=192.168.200.200: (31.463586862s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-716966 ip
helpers_test.go:175: Cleaning up "static-ip-716966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-716966
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-716966: (2.072217231s)
--- PASS: TestKicStaticIP (33.71s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (75.41s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-740424 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-740424 --driver=docker  --container-runtime=docker: (33.21630382s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-742877 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-742877 --driver=docker  --container-runtime=docker: (36.312997742s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-740424
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-742877
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-742877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-742877
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-742877: (2.220566579s)
helpers_test.go:175: Cleaning up "first-740424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-740424
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-740424: (2.095755774s)
--- PASS: TestMinikubeProfile (75.41s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-351845 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-351845 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.673902769s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-351845 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
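In the StartWithMountFirst/VerifyMountFirst pair above, --mount combined with --no-kubernetes brings up a bare node whose /minikube-host path is a mount of a host directory, and the ssh -- ls call is the whole verification. A minimal reproduction with a hypothetical profile name (flag values mirror this run):

	out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
		--mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=docker
	# the host directory should be visible inside the node
	out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host
	# grep runs locally on the ssh output
	out/minikube-linux-arm64 -p mount-demo ssh -- mount | grep /minikube-host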

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-353626 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-353626 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.060872311s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.06s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-353626 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.48s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-351845 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-351845 --alsologtostderr -v=5: (1.475478264s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-353626 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-353626
E1216 20:03:51.430024    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-353626: (1.200079836s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-353626
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-353626: (7.218270604s)
--- PASS: TestMountStart/serial/RestartStopped (8.22s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-353626 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (80.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-943325 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1216 20:04:47.035937    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-943325 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.320922635s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (80.95s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (48.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-943325 -- rollout status deployment/busybox: (4.840508869s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I1216 20:05:28.492726    7569 retry.go:31] will retry after 993.021801ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I1216 20:05:29.653500    7569 retry.go:31] will retry after 1.776358863s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I1216 20:05:31.598172    7569 retry.go:31] will retry after 2.951593904s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I1216 20:05:34.695781    7569 retry.go:31] will retry after 2.481655285s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I1216 20:05:37.348136    7569 retry.go:31] will retry after 3.371433709s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I1216 20:05:40.871620    7569 retry.go:31] will retry after 4.703080919s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I1216 20:05:45.719776    7569 retry.go:31] will retry after 6.930760833s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I1216 20:05:52.796505    7569 retry.go:31] will retry after 17.330236213s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E1216 20:06:10.102716    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6lvzh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6rphx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6lvzh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6rphx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6lvzh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6rphx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (48.87s)
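Note: the retries above poll the same jsonpath query until both busybox replicas report a Pod IP (two space-separated addresses); with only '10.244.0.3' present, the second pod had not yet been assigned an IP. The check can be re-run by hand while the profile is up:
	out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].status.podIP}'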

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6lvzh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6lvzh -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6rphx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6rphx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.08s)
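Note: each pod resolves host.minikube.internal with nslookup, the test assumes the address appears on line 5 of that output (hence awk 'NR==5' | cut -d' ' -f3), and then pings it once to prove the host gateway (192.168.67.1 here) is reachable from both nodes. The extraction step, copied from the run above:
	out/minikube-linux-arm64 kubectl -p multinode-943325 -- exec busybox-58667487b6-6lvzh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"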

                                                
                                    
TestMultiNode/serial/AddNode (18.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-943325 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-943325 -v 3 --alsologtostderr: (17.719757964s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.58s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-943325 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp testdata/cp-test.txt multinode-943325:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp multinode-943325:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile59934987/001/cp-test_multinode-943325.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp multinode-943325:/home/docker/cp-test.txt multinode-943325-m02:/home/docker/cp-test_multinode-943325_multinode-943325-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m02 "sudo cat /home/docker/cp-test_multinode-943325_multinode-943325-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp multinode-943325:/home/docker/cp-test.txt multinode-943325-m03:/home/docker/cp-test_multinode-943325_multinode-943325-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m03 "sudo cat /home/docker/cp-test_multinode-943325_multinode-943325-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp testdata/cp-test.txt multinode-943325-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp multinode-943325-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile59934987/001/cp-test_multinode-943325-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp multinode-943325-m02:/home/docker/cp-test.txt multinode-943325:/home/docker/cp-test_multinode-943325-m02_multinode-943325.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325 "sudo cat /home/docker/cp-test_multinode-943325-m02_multinode-943325.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp multinode-943325-m02:/home/docker/cp-test.txt multinode-943325-m03:/home/docker/cp-test_multinode-943325-m02_multinode-943325-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m03 "sudo cat /home/docker/cp-test_multinode-943325-m02_multinode-943325-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp testdata/cp-test.txt multinode-943325-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp multinode-943325-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile59934987/001/cp-test_multinode-943325-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp multinode-943325-m03:/home/docker/cp-test.txt multinode-943325:/home/docker/cp-test_multinode-943325-m03_multinode-943325.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325 "sudo cat /home/docker/cp-test_multinode-943325-m03_multinode-943325.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 cp multinode-943325-m03:/home/docker/cp-test.txt multinode-943325-m02:/home/docker/cp-test_multinode-943325-m03_multinode-943325-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m02 "sudo cat /home/docker/cp-test_multinode-943325-m03_multinode-943325-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.48s)
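Note: each hop in the copy matrix above pairs a cp with an ssh read-back on the target node to confirm the file landed; the two-command pattern, taken verbatim from the log:
	out/minikube-linux-arm64 -p multinode-943325 cp testdata/cp-test.txt multinode-943325-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-943325 ssh -n multinode-943325-m02 "sudo cat /home/docker/cp-test.txt"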

                                                
                                    
TestMultiNode/serial/StopNode (2.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-943325 node stop m03: (1.228385325s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-943325 status: exit status 7 (518.394653ms)

                                                
                                                
-- stdout --
	multinode-943325
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-943325-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-943325-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-943325 status --alsologtostderr: exit status 7 (618.366574ms)

                                                
                                                
-- stdout --
	multinode-943325
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-943325-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-943325-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:06:44.852403  180374 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:06:44.852570  180374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:06:44.852578  180374 out.go:358] Setting ErrFile to fd 2...
	I1216 20:06:44.852584  180374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:06:44.853110  180374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
	I1216 20:06:44.853459  180374 out.go:352] Setting JSON to false
	I1216 20:06:44.853531  180374 mustload.go:65] Loading cluster: multinode-943325
	I1216 20:06:44.853725  180374 notify.go:220] Checking for updates...
	I1216 20:06:44.854730  180374 config.go:182] Loaded profile config "multinode-943325": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 20:06:44.854789  180374 status.go:174] checking status of multinode-943325 ...
	I1216 20:06:44.855821  180374 cli_runner.go:164] Run: docker container inspect multinode-943325 --format={{.State.Status}}
	I1216 20:06:44.883756  180374 status.go:371] multinode-943325 host status = "Running" (err=<nil>)
	I1216 20:06:44.883779  180374 host.go:66] Checking if "multinode-943325" exists ...
	I1216 20:06:44.884093  180374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-943325
	I1216 20:06:44.909050  180374 host.go:66] Checking if "multinode-943325" exists ...
	I1216 20:06:44.909446  180374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 20:06:44.909503  180374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-943325
	I1216 20:06:44.928848  180374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/multinode-943325/id_rsa Username:docker}
	I1216 20:06:45.040936  180374 ssh_runner.go:195] Run: systemctl --version
	I1216 20:06:45.046619  180374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 20:06:45.065175  180374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 20:06:45.145541  180374 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-16 20:06:45.125780364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 20:06:45.146265  180374 kubeconfig.go:125] found "multinode-943325" server: "https://192.168.67.2:8443"
	I1216 20:06:45.146315  180374 api_server.go:166] Checking apiserver status ...
	I1216 20:06:45.146370  180374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:06:45.166350  180374 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2341/cgroup
	I1216 20:06:45.179554  180374 api_server.go:182] apiserver freezer: "12:freezer:/docker/afd86b43d3dd5d05f1d5e4a07c87d4ff98b45d2bb700c09bb6fb6cfd9000f68e/kubepods/burstable/pod56224bd72ba900a1c6b6ca66c65d7b98/d22a521b805734a014b5f3883ba37194e1360b18bc8f765c3ed35ceed21c35e4"
	I1216 20:06:45.179636  180374 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/afd86b43d3dd5d05f1d5e4a07c87d4ff98b45d2bb700c09bb6fb6cfd9000f68e/kubepods/burstable/pod56224bd72ba900a1c6b6ca66c65d7b98/d22a521b805734a014b5f3883ba37194e1360b18bc8f765c3ed35ceed21c35e4/freezer.state
	I1216 20:06:45.190630  180374 api_server.go:204] freezer state: "THAWED"
	I1216 20:06:45.190665  180374 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1216 20:06:45.198869  180374 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1216 20:06:45.198909  180374 status.go:463] multinode-943325 apiserver status = Running (err=<nil>)
	I1216 20:06:45.198930  180374 status.go:176] multinode-943325 status: &{Name:multinode-943325 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 20:06:45.198954  180374 status.go:174] checking status of multinode-943325-m02 ...
	I1216 20:06:45.199440  180374 cli_runner.go:164] Run: docker container inspect multinode-943325-m02 --format={{.State.Status}}
	I1216 20:06:45.220238  180374 status.go:371] multinode-943325-m02 host status = "Running" (err=<nil>)
	I1216 20:06:45.220273  180374 host.go:66] Checking if "multinode-943325-m02" exists ...
	I1216 20:06:45.220604  180374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-943325-m02
	I1216 20:06:45.242938  180374 host.go:66] Checking if "multinode-943325-m02" exists ...
	I1216 20:06:45.243369  180374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 20:06:45.243440  180374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-943325-m02
	I1216 20:06:45.265991  180374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20091-2258/.minikube/machines/multinode-943325-m02/id_rsa Username:docker}
	I1216 20:06:45.381539  180374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 20:06:45.395615  180374 status.go:176] multinode-943325-m02 status: &{Name:multinode-943325-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 20:06:45.395659  180374 status.go:174] checking status of multinode-943325-m03 ...
	I1216 20:06:45.396017  180374 cli_runner.go:164] Run: docker container inspect multinode-943325-m03 --format={{.State.Status}}
	I1216 20:06:45.413776  180374 status.go:371] multinode-943325-m03 host status = "Stopped" (err=<nil>)
	I1216 20:06:45.413799  180374 status.go:384] host is not running, skipping remaining checks
	I1216 20:06:45.413805  180374 status.go:176] multinode-943325-m03 status: &{Name:multinode-943325-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
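Note: minikube status exits non-zero when part of the profile is down, so the exit status 7 seen twice above is the expected result while m03 is stopped. Individual fields can also be pulled with a Go template, as later tests in this run do (a sketch; the log only shows --format used against single-node profiles):
	out/minikube-linux-arm64 -p multinode-943325 status --format={{.Host}}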

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-943325 node start m03 -v=7 --alsologtostderr: (10.232857778s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.04s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (105.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-943325
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-943325
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-943325: (22.697346953s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-943325 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-943325 --wait=true -v=8 --alsologtostderr: (1m22.706212628s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-943325
--- PASS: TestMultiNode/serial/RestartKeepsNodes (105.58s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-943325 node delete m03: (5.097405615s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.83s)
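Note: the go-template in the final check walks every node's .status.conditions and prints only the Ready condition's status, one per line, so a healthy two-node cluster after the delete prints " True" twice. The exact query, runnable against the current kubeconfig context:
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"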

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 stop
E1216 20:08:51.430213    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-943325 stop: (21.415441751s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-943325 status: exit status 7 (92.434616ms)

                                                
                                                
-- stdout --
	multinode-943325
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-943325-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-943325 status --alsologtostderr: exit status 7 (93.915086ms)

                                                
                                                
-- stdout --
	multinode-943325
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-943325-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:09:09.418259  194192 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:09:09.418400  194192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:09:09.418414  194192 out.go:358] Setting ErrFile to fd 2...
	I1216 20:09:09.418419  194192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:09:09.418699  194192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-2258/.minikube/bin
	I1216 20:09:09.418903  194192 out.go:352] Setting JSON to false
	I1216 20:09:09.418932  194192 mustload.go:65] Loading cluster: multinode-943325
	I1216 20:09:09.418974  194192 notify.go:220] Checking for updates...
	I1216 20:09:09.419362  194192 config.go:182] Loaded profile config "multinode-943325": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 20:09:09.419383  194192 status.go:174] checking status of multinode-943325 ...
	I1216 20:09:09.419960  194192 cli_runner.go:164] Run: docker container inspect multinode-943325 --format={{.State.Status}}
	I1216 20:09:09.439045  194192 status.go:371] multinode-943325 host status = "Stopped" (err=<nil>)
	I1216 20:09:09.439067  194192 status.go:384] host is not running, skipping remaining checks
	I1216 20:09:09.439074  194192 status.go:176] multinode-943325 status: &{Name:multinode-943325 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 20:09:09.439115  194192 status.go:174] checking status of multinode-943325-m02 ...
	I1216 20:09:09.439421  194192 cli_runner.go:164] Run: docker container inspect multinode-943325-m02 --format={{.State.Status}}
	I1216 20:09:09.462126  194192 status.go:371] multinode-943325-m02 host status = "Stopped" (err=<nil>)
	I1216 20:09:09.462153  194192 status.go:384] host is not running, skipping remaining checks
	I1216 20:09:09.462160  194192 status.go:176] multinode-943325-m02 status: &{Name:multinode-943325-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.60s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-943325 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1216 20:09:47.036169    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-943325 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (50.920652604s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-943325 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.68s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-943325
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-943325-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-943325-m02 --driver=docker  --container-runtime=docker: exit status 14 (101.856888ms)

                                                
                                                
-- stdout --
	* [multinode-943325-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-943325-m02' is duplicated with machine name 'multinode-943325-m02' in profile 'multinode-943325'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-943325-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-943325-m03 --driver=docker  --container-runtime=docker: (31.788261032s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-943325
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-943325: exit status 80 (327.209682ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-943325 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-943325-m03 already exists in multinode-943325-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-943325-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-943325-m03: (1.757327892s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.04s)
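Note: both failures above are the intended guard rails: exit status 14 (MK_USAGE) because the requested profile name collides with an existing machine name inside multinode-943325, and exit status 80 (GUEST_NODE_ADD) because the node name minikube would generate collides with the multinode-943325-m03 profile created just before. Existing names can be listed up front with the command already used in ProfileList:
	out/minikube-linux-arm64 profile list --output json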

                                                
                                    
TestPreload (106.7s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-731436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-731436 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m8.849294006s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-731436 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-731436 image pull gcr.io/k8s-minikube/busybox: (2.3189324s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-731436
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-731436: (10.734485983s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-731436 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-731436 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (22.289060086s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-731436 image list
helpers_test.go:175: Cleaning up "test-preload-731436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-731436
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-731436: (2.186471535s)
--- PASS: TestPreload (106.70s)

                                                
                                    
TestScheduledStopUnix (105.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-176197 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-176197 --memory=2048 --driver=docker  --container-runtime=docker: (32.105968143s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-176197 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-176197 -n scheduled-stop-176197
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-176197 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1216 20:12:58.467916    7569 retry.go:31] will retry after 129.092µs: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.468332    7569 retry.go:31] will retry after 80.725µs: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.469863    7569 retry.go:31] will retry after 203.845µs: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.470453    7569 retry.go:31] will retry after 204.715µs: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.470916    7569 retry.go:31] will retry after 414.502µs: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.472065    7569 retry.go:31] will retry after 593.528µs: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.473231    7569 retry.go:31] will retry after 1.563019ms: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.475461    7569 retry.go:31] will retry after 1.11846ms: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.477656    7569 retry.go:31] will retry after 2.560224ms: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.480895    7569 retry.go:31] will retry after 3.601835ms: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.485443    7569 retry.go:31] will retry after 3.846061ms: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.489670    7569 retry.go:31] will retry after 9.628466ms: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.499905    7569 retry.go:31] will retry after 19.120882ms: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.520679    7569 retry.go:31] will retry after 22.355856ms: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
I1216 20:12:58.543925    7569 retry.go:31] will retry after 38.132042ms: open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/scheduled-stop-176197/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-176197 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-176197 -n scheduled-stop-176197
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-176197
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-176197 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1216 20:13:51.430905    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-176197
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-176197: exit status 7 (73.583102ms)

                                                
                                                
-- stdout --
	scheduled-stop-176197
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-176197 -n scheduled-stop-176197
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-176197 -n scheduled-stop-176197: exit status 7 (69.689388ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-176197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-176197
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-176197: (1.677324647s)
--- PASS: TestScheduledStopUnix (105.38s)
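Note: a scheduled stop is armed with --schedule and disarmed with --cancel-scheduled; the pairing exercised above, reduced to a minimal sketch with the same profile:
	out/minikube-linux-arm64 stop -p scheduled-stop-176197 --schedule 5m
	out/minikube-linux-arm64 stop -p scheduled-stop-176197 --cancel-scheduled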

                                                
                                    
TestSkaffold (119.49s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2668684494 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-812012 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-812012 --memory=2600 --driver=docker  --container-runtime=docker: (33.024657705s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2668684494 run --minikube-profile skaffold-812012 --kube-context skaffold-812012 --status-check=true --port-forward=false --interactive=false
E1216 20:14:47.036370    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2668684494 run --minikube-profile skaffold-812012 --kube-context skaffold-812012 --status-check=true --port-forward=false --interactive=false: (1m10.79614487s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7c65d77b6d-ftk8f" [185cd543-b447-4f6b-ba0e-808850067344] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003298231s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-55f4bff458-th9rx" [5c7b1cd1-efba-49f4-98e2-8c93c172b8ef] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003136088s
helpers_test.go:175: Cleaning up "skaffold-812012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-812012
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-812012: (2.936084408s)
--- PASS: TestSkaffold (119.49s)
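Note: skaffold is pointed at the minikube cluster by passing both the profile and the kube-context explicitly; the single invocation below (using this run's temporary skaffold binary) is the whole integration step:
	/tmp/skaffold.exe2668684494 run --minikube-profile skaffold-812012 --kube-context skaffold-812012 --status-check=true --port-forward=false --interactive=false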

                                                
                                    
TestInsufficientStorage (13.63s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-389168 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-389168 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.344543185s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7836f37e-3e6f-4565-807b-dcff801087b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-389168] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"13744fe7-4d2d-4c9c-aac0-12f36a7fcd87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20091"}}
	{"specversion":"1.0","id":"018c558b-baeb-4eb2-85df-7ae0d0ac3b96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"73d4df39-545f-42ed-a98d-6797f4d5357c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig"}}
	{"specversion":"1.0","id":"68b4f050-9585-4a3c-a687-9e87e64b9e0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube"}}
	{"specversion":"1.0","id":"9be08eac-1d11-4384-aad4-9114e46fee8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e095779b-3418-450e-9a96-52cde6a81fa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cf7c63bb-f155-433b-a499-78e1e8351f1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5b6a7ed6-3baf-4e5e-a79e-ac2162460d4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4b7a9a93-abb5-4a9b-be32-c21bf466cc68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b10796e1-6776-4f2d-a3f2-fc9ed8356443","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a8972cb2-030b-4983-9d13-d50d5ee09aaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-389168\" primary control-plane node in \"insufficient-storage-389168\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"77c784b2-9c2e-467a-bb1e-52c372429f7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1734029593-20090 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"074ee306-a417-4ed2-97e3-27a494d2b365","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fa4e0b7-d172-4915-b900-d3496fd16e9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-389168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-389168 --output=json --layout=cluster: exit status 7 (284.830138ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-389168","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-389168","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:16:22.316172  228890 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-389168" does not appear in /home/jenkins/minikube-integration/20091-2258/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-389168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-389168 --output=json --layout=cluster: exit status 7 (293.70919ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-389168","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-389168","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:16:22.611520  228952 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-389168" does not appear in /home/jenkins/minikube-integration/20091-2258/kubeconfig
	E1216 20:16:22.622001  228952 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/insufficient-storage-389168/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-389168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-389168
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-389168: (1.702809402s)
--- PASS: TestInsufficientStorage (13.63s)
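Note: the exit status 26 (RSRC_DOCKER_STORAGE) above is triggered by the MINIKUBE_TEST_STORAGE_CAPACITY=100 / MINIKUBE_TEST_AVAILABLE_STORAGE=19 values visible in the JSON output, which this test appears to set to simulate a full /var; the error text itself notes the check can be skipped, e.g. (a sketch, not something this test does):
	out/minikube-linux-arm64 start -p insufficient-storage-389168 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker --force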

                                                
                                    
TestRunningBinaryUpgrade (87.09s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1231559504 start -p running-upgrade-196032 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1216 20:21:37.720912    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:22:18.682784    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1231559504 start -p running-upgrade-196032 --memory=2200 --vm-driver=docker  --container-runtime=docker: (43.848094959s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-196032 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1216 20:22:50.104069    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-196032 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.706214562s)
helpers_test.go:175: Cleaning up "running-upgrade-196032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-196032
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-196032: (2.445749786s)
--- PASS: TestRunningBinaryUpgrade (87.09s)

                                                
                                    
x
+
TestKubernetesUpgrade (388.02s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-634605 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1216 20:18:51.430576    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-634605 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m2.00510041s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-634605
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-634605: (10.768671439s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-634605 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-634605 status --format={{.Host}}: exit status 7 (70.879965ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-634605 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1216 20:19:47.036065    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-634605 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m40.048763939s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-634605 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-634605 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-634605 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (130.864053ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-634605] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-634605
	    minikube start -p kubernetes-upgrade-634605 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6346052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-634605 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-634605 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1216 20:23:51.430944    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-634605 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.719545024s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-634605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-634605
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-634605: (3.13956033s)
--- PASS: TestKubernetesUpgrade (388.02s)
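
The attempted v1.32.0 -> v1.20.0 downgrade above is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and the profile stays on v1.32.0. A sketch of asserting that behaviour outside the test harness, assuming only the binary path, flags, and exit code visible in this log:

-- example (go) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test makes; binary path, profile name and flags are
	// copied from the log above.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "kubernetes-upgrade-634605",
		"--memory=2200", "--kubernetes-version=v1.20.0",
		"--driver=docker", "--container-runtime=docker")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		// 106 is the K8S_DOWNGRADE_UNSUPPORTED exit status recorded above;
		// the existing cluster is left untouched.
		fmt.Println("downgrade refused as expected (exit 106)")
		return
	}
	fmt.Println("unexpected result:", err)
}
-- /example --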

                                                
                                    
x
+
TestMissingContainerUpgrade (163.64s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.429352439 start -p missing-upgrade-154423 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.429352439 start -p missing-upgrade-154423 --memory=2200 --driver=docker  --container-runtime=docker: (1m32.354712743s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-154423
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-154423: (10.425819594s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-154423
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-154423 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-154423 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (56.99026899s)
helpers_test.go:175: Cleaning up "missing-upgrade-154423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-154423
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-154423: (2.284415786s)
--- PASS: TestMissingContainerUpgrade (163.64s)

                                                
                                    
x
+
TestPause/serial/Start (50.07s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-161951 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1216 20:16:54.497943    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-161951 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (50.066060295s)
--- PASS: TestPause/serial/Start (50.07s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (35.43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-161951 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-161951 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.412442816s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.43s)

                                                
                                    
x
+
TestPause/serial/Pause (0.95s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-161951 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-161951 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-161951 --output=json --layout=cluster: exit status 2 (443.517343ms)

                                                
                                                
-- stdout --
	{"Name":"pause-161951","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-161951","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-161951 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.09s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-161951 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-161951 --alsologtostderr -v=5: (1.089032647s)
--- PASS: TestPause/serial/PauseAgain (1.09s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.81s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-161951 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-161951 --alsologtostderr -v=5: (2.807391538s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.14s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-161951
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-161951: exit status 1 (17.04973ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-161951: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.14s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.01s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (83.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1926284258 start -p stopped-upgrade-249072 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1926284258 start -p stopped-upgrade-249072 --memory=2200 --vm-driver=docker  --container-runtime=docker: (40.81217482s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1926284258 -p stopped-upgrade-249072 stop
E1216 20:20:56.744406    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:20:56.750865    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:20:56.762366    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:20:56.783841    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:20:56.825289    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:20:56.906827    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:20:57.068160    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:20:57.390074    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:20:58.031404    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:20:59.313083    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1926284258 -p stopped-upgrade-249072 stop: (10.928607597s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-249072 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1216 20:21:01.875133    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:21:06.997417    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:21:17.238644    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-249072 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.957466226s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.70s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-249072
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-249072: (1.401959343s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223057 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-223057 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (153.271567ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-223057] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-2258/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-2258/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (43.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223057 --driver=docker  --container-runtime=docker
E1216 20:23:40.604099    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-223057 --driver=docker  --container-runtime=docker: (43.038480486s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-223057 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223057 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-223057 --no-kubernetes --driver=docker  --container-runtime=docker: (15.222402658s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-223057 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-223057 status -o json: exit status 2 (346.290227ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-223057","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-223057
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-223057: (1.849878075s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.42s)
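
The plain status -o json output above is a flat object (string states plus a Worker flag), and the command itself exits 2 when kubelet and apiserver are stopped, so callers have to tolerate the non-zero exit before parsing. A decoding sketch, with the struct inferred from the exact payload shown:

-- example (go) --
package main

import (
	"encoding/json"
	"fmt"
)

// ProfileStatus matches the fields printed by "status -o json" in the run above.
type ProfileStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	// Payload copied from the output above.
	raw := []byte(`{"Name":"NoKubernetes-223057","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
	var st ProfileStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	// A --no-kubernetes profile is expected to show a running host with
	// kubelet and apiserver stopped, exactly as captured above.
	fmt.Printf("host=%s kubelet=%s apiserver=%s worker=%v\n", st.Host, st.Kubelet, st.APIServer, st.Worker)
}
-- /example --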

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223057 --no-kubernetes --driver=docker  --container-runtime=docker
E1216 20:24:47.035942    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-223057 --no-kubernetes --driver=docker  --container-runtime=docker: (10.746245404s)
--- PASS: TestNoKubernetes/serial/Start (10.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-223057 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-223057 "sudo systemctl is-active --quiet service kubelet": exit status 1 (332.022256ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
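
VerifyK8sNotRunning passes because the ssh'd systemctl is-active probe returns non-zero (ssh relays systemd's status 3, i.e. inactive). A sketch of the same check, reusing only the command from this log and treating any non-zero exit as "kubelet not running":

-- example (go) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the test: a non-zero exit from is-active means the kubelet
	// unit is not running on the node.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-223057",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet not active, as expected for a --no-kubernetes profile:", err)
		return
	}
	fmt.Println("unexpected: kubelet reports active")
}
-- /example --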

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-223057
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-223057: (1.277926615s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223057 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-223057 --driver=docker  --container-runtime=docker: (8.726178906s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-223057 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-223057 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.038065ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (167.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-157979 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-157979 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m47.931947623s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (167.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (85.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-692926 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-692926 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (1m25.275101242s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (85.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (13.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-157979 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [df86378e-3166-40b0-a711-8aa8efebcb24] Pending
helpers_test.go:344: "busybox" [df86378e-3166-40b0-a711-8aa8efebcb24] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [df86378e-3166-40b0-a711-8aa8efebcb24] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 13.003365801s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-157979 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (13.75s)
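
DeployApp applies testdata/busybox.yaml and then waits up to 8m0s for pods labelled integration-test=busybox to report Ready before running ulimit inside the pod. Outside the harness the readiness poll can be approximated with kubectl wait; a sketch driving it from Go, using only the context name, label, and timeout visible above:

-- example (go) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Approximation of the test's readiness poll: wait for the busybox pod
	// (label and timeout taken from the log above) to become Ready.
	cmd := exec.Command("kubectl", "--context", "old-k8s-version-157979",
		"wait", "--for=condition=Ready", "pod",
		"-l", "integration-test=busybox", "--timeout=8m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("busybox pod did not become Ready:", err)
	}
}
-- /example --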

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-157979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-157979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.375993792s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-157979 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.57s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-157979 --alsologtostderr -v=3
E1216 20:29:47.036551    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-157979 --alsologtostderr -v=3: (11.325145752s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-157979 -n old-k8s-version-157979
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-157979 -n old-k8s-version-157979: exit status 7 (77.286979ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-157979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (373.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-157979 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-157979 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (6m13.391058987s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-157979 -n old-k8s-version-157979
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (373.85s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-692926 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [488e52b6-ef7b-4bfc-825a-2b420776b2ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [488e52b6-ef7b-4bfc-825a-2b420776b2ee] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004465466s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-692926 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.44s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-692926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-692926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050855179s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-692926 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-692926 --alsologtostderr -v=3
E1216 20:30:56.743515    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-692926 --alsologtostderr -v=3: (11.076089926s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-692926 -n no-preload-692926
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-692926 -n no-preload-692926: exit status 7 (80.485654ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-692926 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (266.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-692926 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E1216 20:33:34.499447    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:33:51.430084    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:34:47.037513    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-692926 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (4m26.313530145s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-692926 -n no-preload-692926
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.73s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qtwzx" [af2acbb3-bca8-4fc5-815b-26115191b98a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00361807s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qtwzx" [af2acbb3-bca8-4fc5-815b-26115191b98a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003879681s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-692926 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-692926 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-692926 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-692926 -n no-preload-692926
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-692926 -n no-preload-692926: exit status 2 (345.087279ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-692926 -n no-preload-692926
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-692926 -n no-preload-692926: exit status 2 (354.259966ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-692926 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-692926 -n no-preload-692926
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-692926 -n no-preload-692926
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (51.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-866330 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E1216 20:35:56.743511    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-866330 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (51.298632091s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bm7fp" [2e6a817d-1978-4446-8a77-da6cfb8d94ac] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004401204s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bm7fp" [2e6a817d-1978-4446-8a77-da6cfb8d94ac] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004919342s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-157979 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-157979 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-157979 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-157979 -n old-k8s-version-157979
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-157979 -n old-k8s-version-157979: exit status 2 (399.955128ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-157979 -n old-k8s-version-157979
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-157979 -n old-k8s-version-157979: exit status 2 (374.509513ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-157979 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-157979 -n old-k8s-version-157979
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-157979 -n old-k8s-version-157979
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-879135 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-879135 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (45.51047297s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.51s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-866330 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72b66672-f890-4d5e-b7d1-c780259ffd15] Pending
helpers_test.go:344: "busybox" [72b66672-f890-4d5e-b7d1-c780259ffd15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72b66672-f890-4d5e-b7d1-c780259ffd15] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004614621s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-866330 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-866330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-866330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.187606797s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-866330 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (10.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-866330 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-866330 --alsologtostderr -v=3: (10.865393761s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.87s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-866330 -n embed-certs-866330
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-866330 -n embed-certs-866330: exit status 7 (75.999034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-866330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (267.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-866330 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-866330 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (4m26.656439442s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-866330 -n embed-certs-866330
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-879135 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22ce0878-a530-478a-b2ce-3d5ad5561e71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [22ce0878-a530-478a-b2ce-3d5ad5561e71] Running
E1216 20:37:19.809543    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.007850774s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-879135 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-879135 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-879135 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.329429579s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-879135 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-879135 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-879135 --alsologtostderr -v=3: (11.247129051s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-879135 -n default-k8s-diff-port-879135
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-879135 -n default-k8s-diff-port-879135: exit status 7 (87.049433ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-879135 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-879135 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E1216 20:38:51.430050    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:30.070864    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:30.077302    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:30.088760    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:30.107011    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:30.110499    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:30.151985    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:30.233410    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:30.395623    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:30.717201    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:31.359036    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:32.641366    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:35.202688    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:40.324664    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:47.036670    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/functional-690644/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:39:50.566983    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:11.049112    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:41.465299    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:41.472018    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:41.483499    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:41.504875    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:41.546216    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:41.627624    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:41.789309    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:42.111444    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:42.753342    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:44.034613    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:46.596756    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:51.719111    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:52.014122    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:40:56.744546    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:41:01.960578    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:41:22.442352    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-879135 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (4m27.736544445s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-879135 -n default-k8s-diff-port-879135
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8v6t5" [3dc242f5-689d-4897-829b-592868037952] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.010699625s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8v6t5" [3dc242f5-689d-4897-829b-592868037952] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003848572s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-866330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-866330 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-866330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-866330 -n embed-certs-866330
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-866330 -n embed-certs-866330: exit status 2 (345.132972ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-866330 -n embed-certs-866330
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-866330 -n embed-certs-866330: exit status 2 (352.421482ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-866330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-866330 -n embed-certs-866330
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-866330 -n embed-certs-866330
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)
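The pause sequence above is: pause the profile, confirm that the APIServer and Kubelet status probes exit 2 (reporting Paused and Stopped respectively), then unpause and re-run both probes. A minimal Go sketch of that sequence, assuming the same binary and profile names as in the log; it is not the harness's own implementation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a command, prints its combined output, and returns the exit code.
func run(args ...string) int {
	cmd := exec.Command(args[0], args[1:]...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1 // command could not be started at all
	}
	return 0
}

func main() {
	const bin, profile = "out/minikube-linux-arm64", "embed-certs-866330"
	run(bin, "pause", "-p", profile, "--alsologtostderr", "-v=1")
	fmt.Println("apiserver exit:", run(bin, "status", "--format={{.APIServer}}", "-p", profile)) // expect 2 / Paused
	fmt.Println("kubelet exit:", run(bin, "status", "--format={{.Kubelet}}", "-p", profile))     // expect 2 / Stopped
	run(bin, "unpause", "-p", profile, "--alsologtostderr", "-v=1")
	fmt.Println("apiserver exit:", run(bin, "status", "--format={{.APIServer}}", "-p", profile)) // expect 0 again
}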

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (39.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-778791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E1216 20:42:03.404439    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-778791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (39.522799464s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5vmkc" [60b11df5-2b17-4106-807e-bd35c0628c02] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004338911s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5vmkc" [60b11df5-2b17-4106-807e-bd35c0628c02] Running
E1216 20:42:13.936204    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003576445s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-879135 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-879135 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.50s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-879135 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-879135 -n default-k8s-diff-port-879135
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-879135 -n default-k8s-diff-port-879135: exit status 2 (422.102925ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-879135 -n default-k8s-diff-port-879135
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-879135 -n default-k8s-diff-port-879135: exit status 2 (386.187521ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-879135 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-879135 -n default-k8s-diff-port-879135
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-879135 -n default-k8s-diff-port-879135
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.50s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (52.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (52.67431764s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-778791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-778791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.493372817s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-778791 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-778791 --alsologtostderr -v=3: (8.189531264s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-778791 -n newest-cni-778791
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-778791 -n newest-cni-778791: exit status 7 (135.234489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-778791 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (27.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-778791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-778791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (26.884228466s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-778791 -n newest-cni-778791
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-778791 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-778791 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-778791 -n newest-cni-778791
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-778791 -n newest-cni-778791: exit status 2 (375.206043ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-778791 -n newest-cni-778791
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-778791 -n newest-cni-778791: exit status 2 (373.605173ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-778791 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-778791 -n newest-cni-778791
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-778791 -n newest-cni-778791
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.77s)
E1216 20:49:57.428947    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/false-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:58.287027    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/default-k8s-diff-port-879135/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:58.710416    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/false-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:50:01.271739    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/false-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:50:06.393680    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/false-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:50:14.500993    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:50:16.635638    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/false-204928/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (57.8633738s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.86s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-204928 "pgrep -a kubelet"
I1216 20:43:17.848479    7569 config.go:182] Loaded profile config "auto-204928": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-204928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l94q4" [4800999d-8972-4d59-81e1-7c5e4c1a0314] Pending
helpers_test.go:344: "netcat-5d86dc444-l94q4" [4800999d-8972-4d59-81e1-7c5e4c1a0314] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 20:43:25.326459    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-l94q4" [4800999d-8972-4d59-81e1-7c5e4c1a0314] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.006446934s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.35s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-204928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
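The three short checks above exercise the auto (default) network plugin from inside the netcat deployment: DNS resolution of kubernetes.default, a TCP connection to localhost:8080, and a hairpin connection back through the netcat service. A small Go sketch of those probes, assuming kubectl and the auto-204928 context from the log; the helper name probe is illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment and reports whether it succeeded.
func probe(context, name string, command ...string) {
	args := append([]string{"--context", context, "exec", "deployment/netcat", "--"}, command...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("%s: err=%v\n%s", name, err, out)
}

func main() {
	ctx := "auto-204928" // kubectl context from the log above
	probe(ctx, "DNS", "nslookup", "kubernetes.default")                        // in-cluster service resolution
	probe(ctx, "Localhost", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080") // pod reaches its own port
	probe(ctx, "HairPin", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")      // pod reaches itself via its service
}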

                                                
                                    
TestNetworkPlugins/group/false/Start (58.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (58.000101796s)
--- PASS: TestNetworkPlugins/group/false/Start (58.00s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-204928 "pgrep -a kubelet"
I1216 20:44:08.609740    7569 config.go:182] Loaded profile config "custom-flannel-204928": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-204928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dj62p" [6c253f48-c73a-4b79-83b6-8e9b3086e4a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dj62p" [6c253f48-c73a-4b79-83b6-8e9b3086e4a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004609625s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-204928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (73.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m13.465021322s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.47s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-204928 "pgrep -a kubelet"
I1216 20:44:55.807755    7569 config.go:182] Loaded profile config "false-204928": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-204928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9v9w8" [b5fb46b9-3bef-4588-911e-f56f144c9e2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 20:44:57.777495    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-9v9w8" [b5fb46b9-3bef-4588-911e-f56f144c9e2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004755046s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-204928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (57.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E1216 20:45:41.464714    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:45:56.744038    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/skaffold-812012/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (57.972119735s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.97s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vdfk9" [0d203658-03ec-4032-aad6-83278b6700e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004566214s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
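The ControllerPod step waits up to 10m for a pod labelled app=kindnet in kube-system to become healthy. A rough Go sketch of an equivalent wait, assuming kubectl polling rather than the Kubernetes client the harness actually uses; the polling interval is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll until a pod labelled app=kindnet in kube-system reports phase Running,
	// giving up after the same 10m budget the test allows itself.
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "kindnet-204928",
			"get", "pods", "-n", "kube-system", "-l", "app=kindnet",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("app=kindnet controller pod is Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for app=kindnet pod in kube-system")
}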

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-204928 "pgrep -a kubelet"
I1216 20:46:07.907920    7569 config.go:182] Loaded profile config "kindnet-204928": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (14.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-204928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ndb9v" [79a8d03d-96f2-4d94-b5e9-abac0bf5a961] Pending
E1216 20:46:09.167901    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-ndb9v" [79a8d03d-96f2-4d94-b5e9-abac0bf5a961] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ndb9v" [79a8d03d-96f2-4d94-b5e9-abac0bf5a961] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.004023062s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-204928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rcnnt" [58fc6f29-e0e9-462b-bf3f-fa0ebcfc3184] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004480093s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-204928 "pgrep -a kubelet"
I1216 20:46:38.677584    7569 config.go:182] Loaded profile config "flannel-204928": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-204928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fb6ng" [28711062-6bfb-42bb-8a28-8b3daf95951c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-fb6ng" [28711062-6bfb-42bb-8a28-8b3daf95951c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004180652s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (56.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (56.978066788s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.98s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-204928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)
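Note: the DNS, Localhost and HairPin checks above are single kubectl exec probes against the netcat deployment: an in-cluster nslookup of kubernetes.default, a netcat connect to localhost:8080, and a netcat connect back through the service named netcat (which fronts the same pod, exercising hairpin traffic). A minimal sketch of the same three probes run by hand against the flannel-204928 context:

	# in-cluster DNS: resolve the kubernetes.default service
	kubectl --context flannel-204928 exec deployment/netcat -- nslookup kubernetes.default
	# localhost: connect to port 8080 inside the pod
	kubectl --context flannel-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: connect to the pod's own service from inside the pod
	kubectl --context flannel-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"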

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (78.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1216 20:47:19.558627    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/default-k8s-diff-port-879135/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:47:24.680374    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/default-k8s-diff-port-879135/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:47:34.921899    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/default-k8s-diff-port-879135/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m18.551276594s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-204928 "pgrep -a kubelet"
I1216 20:47:43.473048    7569 config.go:182] Loaded profile config "enable-default-cni-204928": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-204928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-m7798" [d581333b-251d-4b82-a034-31a56094d780] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-m7798" [d581333b-251d-4b82-a034-31a56094d780] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00579018s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-204928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (40.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1216 20:48:18.164436    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:18.170799    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:18.182186    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:18.203541    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:18.244886    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:18.326231    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:18.487730    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:18.809665    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:19.451352    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:20.732962    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:23.295045    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:28.416679    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:48:36.364931    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/default-k8s-diff-port-879135/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (40.901445033s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (40.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-204928 "pgrep -a kubelet"
I1216 20:48:37.417848    7569 config.go:182] Loaded profile config "bridge-204928": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-204928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xrj4q" [145980a6-404f-4b2b-9e67-caabe96e7af4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 20:48:38.658048    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-xrj4q" [145980a6-404f-4b2b-9e67-caabe96e7af4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.003017542s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-204928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1216 20:48:51.430159    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/addons-309585/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-204928 "pgrep -a kubelet"
I1216 20:48:57.654132    7569 config.go:182] Loaded profile config "kubenet-204928": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-204928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qbz5l" [fc5030d9-5aae-4c8d-858c-c16395bfb77e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 20:48:59.140332    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/auto-204928/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-qbz5l" [fc5030d9-5aae-4c8d-858c-c16395bfb77e] Running
E1216 20:49:08.970001    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:08.976361    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:08.989929    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:09.011469    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:09.052742    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004128968s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (21.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-204928 exec deployment/netcat -- nslookup kubernetes.default
E1216 20:49:09.134274    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:09.295696    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:09.617006    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:10.259379    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-204928 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.278658806s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 20:49:24.337070    7569 retry.go:31] will retry after 1.460086243s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context kubenet-204928 exec deployment/netcat -- nslookup kubernetes.default
E1216 20:49:29.468679    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:30.070665    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/old-k8s-version-157979/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context kubenet-204928 exec deployment/netcat -- nslookup kubernetes.default: (5.219779452s)
--- PASS: TestNetworkPlugins/group/kubenet/DNS (21.96s)
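Note: the kubenet DNS check first timed out (";; connection timed out; no servers could be reached"), the harness retried after roughly 1.5s, and the second attempt succeeded. A hypothetical way to reproduce the same probe by hand with a simple retry loop (not the test's own retry logic) would be:

	# retry the in-cluster lookup a few times, as the harness effectively does
	for i in 1 2 3; do
	  kubectl --context kubenet-204928 exec deployment/netcat -- nslookup kubernetes.default && break
	  sleep 5
	done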

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (76.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E1216 20:49:14.104739    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:49:19.226711    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-204928 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m16.664876186s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.66s)
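Note: each Start entry in this group boots a fresh cluster with 3072 MB of memory on the docker driver and docker runtime, differing only in the networking flag (--enable-default-cni=true, --cni=bridge, --network-plugin=kubenet, --cni=calico). A hedged local approximation, using a stock minikube binary rather than the out/minikube-linux-arm64 build under test:

	# example: recreate the bridge-CNI configuration from this run
	minikube start -p bridge-204928 --memory=3072 --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=docker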

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zprtp" [12881397-d24c-4c08-8c1b-9572e168de6a] Running
E1216 20:50:30.912936    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/custom-flannel-204928/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005081613s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
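Note: the ControllerPod step waits up to 10 minutes for the calico-node pod (label k8s-app=calico-node in kube-system) to be Running and healthy. An approximate manual check against the same profile (assuming it is still present):

	# list the Calico node pods the test waits on
	kubectl --context calico-204928 -n kube-system get pods -l k8s-app=calico-node
	# block until they report Ready, with the same 10-minute ceiling
	kubectl --context calico-204928 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m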

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-204928 "pgrep -a kubelet"
I1216 20:50:36.615991    7569 config.go:182] Loaded profile config "calico-204928": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-204928 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bwc95" [411d2286-f205-46d9-af9b-ad5c433b5289] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 20:50:37.117062    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/false-204928/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:50:41.465298    7569 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/no-preload-692926/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-bwc95" [411d2286-f205-46d9-af9b-ad5c433b5289] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004405543s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-204928 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-204928 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    

Test skip (25/345)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-036021 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-036021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-036021
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-009110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-009110
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-204928 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-204928" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20091-2258/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 20:24:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-223057
contexts:
- context:
    cluster: NoKubernetes-223057
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 20:24:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-223057
  name: NoKubernetes-223057
current-context: NoKubernetes-223057
kind: Config
preferences: {}
users:
- name: NoKubernetes-223057
  user:
    client-certificate: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/NoKubernetes-223057/client.crt
    client-key: /home/jenkins/minikube-integration/20091-2258/.minikube/profiles/NoKubernetes-223057/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-204928

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-204928" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204928"

                                                
                                                
----------------------- debugLogs end: cilium-204928 [took: 4.650354983s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-204928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-204928
--- SKIP: TestNetworkPlugins/group/cilium (4.81s)